Test Report: Docker_Linux_crio_arm64 21934

0ee4f00f81c855d6dbc5c3cb2cb1b494940d38dc:2025-11-22:42437

Failed tests (39/328)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.29
35 TestAddons/parallel/Registry 15.95
36 TestAddons/parallel/RegistryCreds 0.48
37 TestAddons/parallel/Ingress 146.62
38 TestAddons/parallel/InspektorGadget 5.46
39 TestAddons/parallel/MetricsServer 5.42
41 TestAddons/parallel/CSI 43.51
42 TestAddons/parallel/Headlamp 3.38
43 TestAddons/parallel/CloudSpanner 5.37
44 TestAddons/parallel/LocalPath 9.48
45 TestAddons/parallel/NvidiaDevicePlugin 5.27
46 TestAddons/parallel/Yakd 6.26
97 TestFunctional/parallel/ServiceCmdConnect 603.53
125 TestFunctional/parallel/ServiceCmd/DeployApp 600.93
134 TestFunctional/parallel/ServiceCmd/HTTPS 0.51
135 TestFunctional/parallel/ServiceCmd/Format 0.48
136 TestFunctional/parallel/ServiceCmd/URL 0.49
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.31
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.94
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.16
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.31
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.2
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.46
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 532.4
174 TestMultiControlPlane/serial/DeleteSecondaryNode 8.65
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 3.22
191 TestJSONOutput/pause/Command 1.9
197 TestJSONOutput/unpause/Command 1.58
282 TestPause/serial/Pause 7.55
297 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.47
304 TestStartStop/group/old-k8s-version/serial/Pause 6.58
310 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.64
315 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 3.73
322 TestStartStop/group/no-preload/serial/Pause 8.29
328 TestStartStop/group/embed-certs/serial/Pause 6.93
332 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.93
337 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.88
344 TestStartStop/group/newest-cni/serial/Pause 6.88
349 TestStartStop/group/default-k8s-diff-port/serial/Pause 9.46
TestAddons/serial/Volcano (0.29s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-882841 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-882841 addons disable volcano --alsologtostderr -v=1: exit status 11 (285.2175ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1121 23:50:39.494772  523625 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:50:39.495539  523625 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:50:39.495552  523625 out.go:374] Setting ErrFile to fd 2...
	I1121 23:50:39.495573  523625 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:50:39.495871  523625 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1121 23:50:39.496157  523625 mustload.go:66] Loading cluster: addons-882841
	I1121 23:50:39.496519  523625 config.go:182] Loaded profile config "addons-882841": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:50:39.496536  523625 addons.go:622] checking whether the cluster is paused
	I1121 23:50:39.496642  523625 config.go:182] Loaded profile config "addons-882841": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:50:39.496658  523625 host.go:66] Checking if "addons-882841" exists ...
	I1121 23:50:39.497208  523625 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:50:39.525879  523625 ssh_runner.go:195] Run: systemctl --version
	I1121 23:50:39.525956  523625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:50:39.543290  523625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	I1121 23:50:39.644583  523625 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 23:50:39.644672  523625 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 23:50:39.674765  523625 cri.go:89] found id: "d4288bfcc52cab787e4c57cd6f6ce8b5e4eab8e0f753f7b4b8c0dfbb6d7fcacf"
	I1121 23:50:39.674783  523625 cri.go:89] found id: "c2b953c3eb94bb9506b88f8f9082db05bab7bbad3c3c92fb83a61fd4148bcd7c"
	I1121 23:50:39.674788  523625 cri.go:89] found id: "4b84c2719358c69a49238d48c9737dea6a7f98672fde68733fbdfd0c5f76c519"
	I1121 23:50:39.674791  523625 cri.go:89] found id: "03e07c8bd5633a56d64051d6a63832c1a6fc109d661151083091d50ee1a7dfb7"
	I1121 23:50:39.674804  523625 cri.go:89] found id: "4f400d29139bbf4aeaa41d46055af4710e38ae5d6a844d3ee8b87ce4de3a0f3a"
	I1121 23:50:39.674808  523625 cri.go:89] found id: "0ecc1bcf505044ab1e7b917a3904e9d8ff652e08129c43483e6ff6f465bc7f48"
	I1121 23:50:39.674811  523625 cri.go:89] found id: "0d6392a88c56afc228d16b7659bcfa96628c1585bdb4b03af537ff609bf9f34a"
	I1121 23:50:39.674814  523625 cri.go:89] found id: "492d7c5835fb8502b60633915d9d3f885aa7bc3696e4febec5b394bab0a6773b"
	I1121 23:50:39.674817  523625 cri.go:89] found id: "96794062627c79a139839f726287bb12566d789fa0f4d5b1994cd88518a2e2eb"
	I1121 23:50:39.674822  523625 cri.go:89] found id: "d6df5ea7f4eb5e6b4852fd9cf791dda6c35753e420985175cbcc2a80b368d82b"
	I1121 23:50:39.674825  523625 cri.go:89] found id: "800239fcbfd600dbcec2ac03099f8200b62cf3769357e03ab2d40f672490913e"
	I1121 23:50:39.674829  523625 cri.go:89] found id: "1d9bfd16346a7b544d9696f5ac0700133b9c44dfb05b597e6ece14fdf7c1ee4d"
	I1121 23:50:39.674831  523625 cri.go:89] found id: "b7eb954adbbab5deddd57f625c7dce81a5fbc6e9ee1d2cb260d88e1fbd1482da"
	I1121 23:50:39.674834  523625 cri.go:89] found id: "ba42295e49f9af2999894c8bde53ee31c193600b80cc921d12a7b280aefbca13"
	I1121 23:50:39.674837  523625 cri.go:89] found id: "36f901d7268653e5e64e73d2f8c787b658cab9899bc17b2cc522fb984b5ae3f7"
	I1121 23:50:39.674842  523625 cri.go:89] found id: "561d110537c5cbfb43c832086d9c8216a7180387df20a1b9c68b29a4b682f207"
	I1121 23:50:39.674845  523625 cri.go:89] found id: "e6dc31e0930681fac6fbd625f4ec7a07e57c10d13a728a7ec163a4c66a6d4a2b"
	I1121 23:50:39.674848  523625 cri.go:89] found id: "074654b9d6b9f820e2f61d8ef839ef5ebec8673802a3e034c02530f243f023d0"
	I1121 23:50:39.674851  523625 cri.go:89] found id: "970af788676bddee24edc4dbf7882805510ac451d6658537c9d2152752c3ffee"
	I1121 23:50:39.674854  523625 cri.go:89] found id: "f6c2269669bcf9942b43c38a6d80d882a37c12fba06a2b1b514b07dbd6183350"
	I1121 23:50:39.674858  523625 cri.go:89] found id: "415d7ebb38dbfa2b139e79fcf924802ecf12d3ba74e075e30026fdd18353d343"
	I1121 23:50:39.674861  523625 cri.go:89] found id: "ba805611fb053ae520be5762cfc188a2dd5915f488bed9770376cb5e14b60936"
	I1121 23:50:39.674864  523625 cri.go:89] found id: "0b539dfc17788b7400ee2eb5abadb008e5a8a9796bb112d8a69f14a34f2fd551"
	I1121 23:50:39.674867  523625 cri.go:89] found id: ""
	I1121 23:50:39.674922  523625 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 23:50:39.690053  523625 out.go:203] 
	W1121 23:50:39.693088  523625 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:50:39Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:50:39Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 23:50:39.693130  523625 out.go:285] * 
	* 
	W1121 23:50:39.699995  523625 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 23:50:39.703000  523625 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-882841 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.29s)
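
Note: the disable command never reaches the addon itself. minikube's "check paused" step lists the kube-system containers with crictl successfully, but the follow-up sudo runc list -f json fails with "open /run/runc: no such file or directory" on this CRI-O node, so the command exits with MK_ADDON_DISABLE_PAUSED. A minimal way to replay that check by hand, using the profile name and the exact commands from the log above (an interactive sketch, not part of the test suite):

	out/minikube-linux-arm64 -p addons-882841 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"   # succeeds and prints the container IDs seen above
	out/minikube-linux-arm64 -p addons-882841 ssh "sudo runc list -f json"                                                      # fails: open /run/runc: no such file or directory

The same runc error recurs in the Registry and RegistryCreds failures below.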

TestAddons/parallel/Registry (15.95s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 4.316528ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-5jvr4" [7a29be8b-519d-4b81-81ff-bac494b2ea86] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.012127968s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-rrtfc" [1d8939ca-bf48-4609-94de-6b5ca07c973f] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003148866s
addons_test.go:392: (dbg) Run:  kubectl --context addons-882841 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-882841 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-882841 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.341339472s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-882841 ip
2025/11/21 23:51:06 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-882841 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-882841 addons disable registry --alsologtostderr -v=1: exit status 11 (314.659233ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1121 23:51:06.720607  524195 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:51:06.721438  524195 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:51:06.721457  524195 out.go:374] Setting ErrFile to fd 2...
	I1121 23:51:06.721464  524195 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:51:06.721752  524195 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1121 23:51:06.722109  524195 mustload.go:66] Loading cluster: addons-882841
	I1121 23:51:06.722564  524195 config.go:182] Loaded profile config "addons-882841": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:51:06.722585  524195 addons.go:622] checking whether the cluster is paused
	I1121 23:51:06.722735  524195 config.go:182] Loaded profile config "addons-882841": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:51:06.722753  524195 host.go:66] Checking if "addons-882841" exists ...
	I1121 23:51:06.723267  524195 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:51:06.744199  524195 ssh_runner.go:195] Run: systemctl --version
	I1121 23:51:06.744265  524195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:51:06.760891  524195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	I1121 23:51:06.860971  524195 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 23:51:06.861092  524195 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 23:51:06.907440  524195 cri.go:89] found id: "d4288bfcc52cab787e4c57cd6f6ce8b5e4eab8e0f753f7b4b8c0dfbb6d7fcacf"
	I1121 23:51:06.907464  524195 cri.go:89] found id: "c2b953c3eb94bb9506b88f8f9082db05bab7bbad3c3c92fb83a61fd4148bcd7c"
	I1121 23:51:06.907469  524195 cri.go:89] found id: "4b84c2719358c69a49238d48c9737dea6a7f98672fde68733fbdfd0c5f76c519"
	I1121 23:51:06.907473  524195 cri.go:89] found id: "03e07c8bd5633a56d64051d6a63832c1a6fc109d661151083091d50ee1a7dfb7"
	I1121 23:51:06.907477  524195 cri.go:89] found id: "4f400d29139bbf4aeaa41d46055af4710e38ae5d6a844d3ee8b87ce4de3a0f3a"
	I1121 23:51:06.907481  524195 cri.go:89] found id: "0ecc1bcf505044ab1e7b917a3904e9d8ff652e08129c43483e6ff6f465bc7f48"
	I1121 23:51:06.907484  524195 cri.go:89] found id: "0d6392a88c56afc228d16b7659bcfa96628c1585bdb4b03af537ff609bf9f34a"
	I1121 23:51:06.907487  524195 cri.go:89] found id: "492d7c5835fb8502b60633915d9d3f885aa7bc3696e4febec5b394bab0a6773b"
	I1121 23:51:06.907490  524195 cri.go:89] found id: "96794062627c79a139839f726287bb12566d789fa0f4d5b1994cd88518a2e2eb"
	I1121 23:51:06.907496  524195 cri.go:89] found id: "d6df5ea7f4eb5e6b4852fd9cf791dda6c35753e420985175cbcc2a80b368d82b"
	I1121 23:51:06.907499  524195 cri.go:89] found id: "800239fcbfd600dbcec2ac03099f8200b62cf3769357e03ab2d40f672490913e"
	I1121 23:51:06.907502  524195 cri.go:89] found id: "1d9bfd16346a7b544d9696f5ac0700133b9c44dfb05b597e6ece14fdf7c1ee4d"
	I1121 23:51:06.907504  524195 cri.go:89] found id: "b7eb954adbbab5deddd57f625c7dce81a5fbc6e9ee1d2cb260d88e1fbd1482da"
	I1121 23:51:06.907507  524195 cri.go:89] found id: "ba42295e49f9af2999894c8bde53ee31c193600b80cc921d12a7b280aefbca13"
	I1121 23:51:06.907512  524195 cri.go:89] found id: "36f901d7268653e5e64e73d2f8c787b658cab9899bc17b2cc522fb984b5ae3f7"
	I1121 23:51:06.907517  524195 cri.go:89] found id: "561d110537c5cbfb43c832086d9c8216a7180387df20a1b9c68b29a4b682f207"
	I1121 23:51:06.907520  524195 cri.go:89] found id: "e6dc31e0930681fac6fbd625f4ec7a07e57c10d13a728a7ec163a4c66a6d4a2b"
	I1121 23:51:06.907524  524195 cri.go:89] found id: "074654b9d6b9f820e2f61d8ef839ef5ebec8673802a3e034c02530f243f023d0"
	I1121 23:51:06.907527  524195 cri.go:89] found id: "970af788676bddee24edc4dbf7882805510ac451d6658537c9d2152752c3ffee"
	I1121 23:51:06.907530  524195 cri.go:89] found id: "f6c2269669bcf9942b43c38a6d80d882a37c12fba06a2b1b514b07dbd6183350"
	I1121 23:51:06.907534  524195 cri.go:89] found id: "415d7ebb38dbfa2b139e79fcf924802ecf12d3ba74e075e30026fdd18353d343"
	I1121 23:51:06.907538  524195 cri.go:89] found id: "ba805611fb053ae520be5762cfc188a2dd5915f488bed9770376cb5e14b60936"
	I1121 23:51:06.907541  524195 cri.go:89] found id: "0b539dfc17788b7400ee2eb5abadb008e5a8a9796bb112d8a69f14a34f2fd551"
	I1121 23:51:06.907543  524195 cri.go:89] found id: ""
	I1121 23:51:06.907597  524195 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 23:51:06.934124  524195 out.go:203] 
	W1121 23:51:06.939819  524195 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:51:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:51:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 23:51:06.939846  524195 out.go:285] * 
	* 
	W1121 23:51:06.954971  524195 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 23:51:06.966964  524195 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-882841 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (15.95s)

TestAddons/parallel/RegistryCreds (0.48s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.103489ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-882841
addons_test.go:332: (dbg) Run:  kubectl --context addons-882841 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-882841 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-882841 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (269.662363ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1121 23:51:56.128480  526181 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:51:56.129447  526181 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:51:56.129478  526181 out.go:374] Setting ErrFile to fd 2...
	I1121 23:51:56.129485  526181 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:51:56.129968  526181 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1121 23:51:56.130564  526181 mustload.go:66] Loading cluster: addons-882841
	I1121 23:51:56.130947  526181 config.go:182] Loaded profile config "addons-882841": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:51:56.130968  526181 addons.go:622] checking whether the cluster is paused
	I1121 23:51:56.131081  526181 config.go:182] Loaded profile config "addons-882841": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:51:56.131096  526181 host.go:66] Checking if "addons-882841" exists ...
	I1121 23:51:56.131722  526181 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:51:56.150532  526181 ssh_runner.go:195] Run: systemctl --version
	I1121 23:51:56.150594  526181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:51:56.171410  526181 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	I1121 23:51:56.272308  526181 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 23:51:56.272389  526181 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 23:51:56.300689  526181 cri.go:89] found id: "d4288bfcc52cab787e4c57cd6f6ce8b5e4eab8e0f753f7b4b8c0dfbb6d7fcacf"
	I1121 23:51:56.300711  526181 cri.go:89] found id: "c2b953c3eb94bb9506b88f8f9082db05bab7bbad3c3c92fb83a61fd4148bcd7c"
	I1121 23:51:56.300716  526181 cri.go:89] found id: "4b84c2719358c69a49238d48c9737dea6a7f98672fde68733fbdfd0c5f76c519"
	I1121 23:51:56.300720  526181 cri.go:89] found id: "03e07c8bd5633a56d64051d6a63832c1a6fc109d661151083091d50ee1a7dfb7"
	I1121 23:51:56.300724  526181 cri.go:89] found id: "4f400d29139bbf4aeaa41d46055af4710e38ae5d6a844d3ee8b87ce4de3a0f3a"
	I1121 23:51:56.300727  526181 cri.go:89] found id: "0ecc1bcf505044ab1e7b917a3904e9d8ff652e08129c43483e6ff6f465bc7f48"
	I1121 23:51:56.300730  526181 cri.go:89] found id: "0d6392a88c56afc228d16b7659bcfa96628c1585bdb4b03af537ff609bf9f34a"
	I1121 23:51:56.300733  526181 cri.go:89] found id: "492d7c5835fb8502b60633915d9d3f885aa7bc3696e4febec5b394bab0a6773b"
	I1121 23:51:56.300736  526181 cri.go:89] found id: "96794062627c79a139839f726287bb12566d789fa0f4d5b1994cd88518a2e2eb"
	I1121 23:51:56.300742  526181 cri.go:89] found id: "d6df5ea7f4eb5e6b4852fd9cf791dda6c35753e420985175cbcc2a80b368d82b"
	I1121 23:51:56.300746  526181 cri.go:89] found id: "800239fcbfd600dbcec2ac03099f8200b62cf3769357e03ab2d40f672490913e"
	I1121 23:51:56.300749  526181 cri.go:89] found id: "1d9bfd16346a7b544d9696f5ac0700133b9c44dfb05b597e6ece14fdf7c1ee4d"
	I1121 23:51:56.300753  526181 cri.go:89] found id: "b7eb954adbbab5deddd57f625c7dce81a5fbc6e9ee1d2cb260d88e1fbd1482da"
	I1121 23:51:56.300756  526181 cri.go:89] found id: "ba42295e49f9af2999894c8bde53ee31c193600b80cc921d12a7b280aefbca13"
	I1121 23:51:56.300759  526181 cri.go:89] found id: "36f901d7268653e5e64e73d2f8c787b658cab9899bc17b2cc522fb984b5ae3f7"
	I1121 23:51:56.300764  526181 cri.go:89] found id: "561d110537c5cbfb43c832086d9c8216a7180387df20a1b9c68b29a4b682f207"
	I1121 23:51:56.300778  526181 cri.go:89] found id: "e6dc31e0930681fac6fbd625f4ec7a07e57c10d13a728a7ec163a4c66a6d4a2b"
	I1121 23:51:56.300782  526181 cri.go:89] found id: "074654b9d6b9f820e2f61d8ef839ef5ebec8673802a3e034c02530f243f023d0"
	I1121 23:51:56.300785  526181 cri.go:89] found id: "970af788676bddee24edc4dbf7882805510ac451d6658537c9d2152752c3ffee"
	I1121 23:51:56.300788  526181 cri.go:89] found id: "f6c2269669bcf9942b43c38a6d80d882a37c12fba06a2b1b514b07dbd6183350"
	I1121 23:51:56.300793  526181 cri.go:89] found id: "415d7ebb38dbfa2b139e79fcf924802ecf12d3ba74e075e30026fdd18353d343"
	I1121 23:51:56.300796  526181 cri.go:89] found id: "ba805611fb053ae520be5762cfc188a2dd5915f488bed9770376cb5e14b60936"
	I1121 23:51:56.300799  526181 cri.go:89] found id: "0b539dfc17788b7400ee2eb5abadb008e5a8a9796bb112d8a69f14a34f2fd551"
	I1121 23:51:56.300802  526181 cri.go:89] found id: ""
	I1121 23:51:56.300857  526181 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 23:51:56.315851  526181 out.go:203] 
	W1121 23:51:56.318936  526181 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:51:56Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:51:56Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 23:51:56.318959  526181 out.go:285] * 
	* 
	W1121 23:51:56.325575  526181 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 23:51:56.328467  526181 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-882841 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.48s)

TestAddons/parallel/Ingress (146.62s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-882841 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-882841 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-882841 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [b335e2c6-1a65-433b-9d00-9de7be772d99] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [b335e2c6-1a65-433b-9d00-9de7be772d99] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.005863893s
I1121 23:51:38.141141  516937 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-882841 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-882841 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.707387831s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-882841 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-882841 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
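
Note: the curl above ran through minikube ssh and exited with status 28, which from curl typically indicates a timeout, so the ingress path on the node's port 80 never answered. A short debugging sketch, using only the profile and context names from this run; the ingress resource is defined by testdata/nginx-ingress-v1.yaml, so it is listed generically rather than by name:

	kubectl --context addons-882841 -n ingress-nginx get pods,svc -o wide                                                    # is the controller Running and its service populated?
	kubectl --context addons-882841 get ingress                                                                              # has the nginx ingress been admitted and assigned an address?
	out/minikube-linux-arm64 -p addons-882841 ssh "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"    # same probe as the test, verbose and with a shorter timeout
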
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-882841
helpers_test.go:243: (dbg) docker inspect addons-882841:

-- stdout --
	[
	    {
	        "Id": "cbf01a114cc53a8b6c72a0ed56d9776d5ffd3dfdacd5a45cb3e08babfb8e2033",
	        "Created": "2025-11-21T23:48:11.665008112Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 518101,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T23:48:11.703162071Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:97061622515a85470dad13cb046ad5fc5021fcd2b05aa921a620abb52951cd5d",
	        "ResolvConfPath": "/var/lib/docker/containers/cbf01a114cc53a8b6c72a0ed56d9776d5ffd3dfdacd5a45cb3e08babfb8e2033/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cbf01a114cc53a8b6c72a0ed56d9776d5ffd3dfdacd5a45cb3e08babfb8e2033/hostname",
	        "HostsPath": "/var/lib/docker/containers/cbf01a114cc53a8b6c72a0ed56d9776d5ffd3dfdacd5a45cb3e08babfb8e2033/hosts",
	        "LogPath": "/var/lib/docker/containers/cbf01a114cc53a8b6c72a0ed56d9776d5ffd3dfdacd5a45cb3e08babfb8e2033/cbf01a114cc53a8b6c72a0ed56d9776d5ffd3dfdacd5a45cb3e08babfb8e2033-json.log",
	        "Name": "/addons-882841",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-882841:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-882841",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "cbf01a114cc53a8b6c72a0ed56d9776d5ffd3dfdacd5a45cb3e08babfb8e2033",
	                "LowerDir": "/var/lib/docker/overlay2/6c3988b3528b3a3bf63b623a08f0a43fa28c9bfbdf23b4a999ec7d70676a8e42-init/diff:/var/lib/docker/overlay2/7e8788c6de692bc1c3758a2bb2c4b8da0fbba26855f855c0f3b655bfbdd92f8e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6c3988b3528b3a3bf63b623a08f0a43fa28c9bfbdf23b4a999ec7d70676a8e42/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6c3988b3528b3a3bf63b623a08f0a43fa28c9bfbdf23b4a999ec7d70676a8e42/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6c3988b3528b3a3bf63b623a08f0a43fa28c9bfbdf23b4a999ec7d70676a8e42/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-882841",
	                "Source": "/var/lib/docker/volumes/addons-882841/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-882841",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-882841",
	                "name.minikube.sigs.k8s.io": "addons-882841",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5cf314fe3031b36014fe97d0f307d0af8308642c7f1c4dbb4b3be2895bcb12b4",
	            "SandboxKey": "/var/run/docker/netns/5cf314fe3031",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33495"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33496"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33499"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33497"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33498"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-882841": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "26:43:f7:c6:39:89",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "129d8f735ebe44960774442ba542960f928613e67c001d7be8766fc635e8e2ec",
	                    "EndpointID": "529b816e88f8f65b7e9d124edf03f4e2170d844d4d1cb2d5af6003ccf3f08c45",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-882841",
	                        "cbf01a114cc5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-882841 -n addons-882841
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-882841 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-882841 logs -n 25: (1.520037007s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-291874                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-291874 │ jenkins │ v1.37.0 │ 21 Nov 25 23:47 UTC │ 21 Nov 25 23:47 UTC │
	│ start   │ --download-only -p binary-mirror-343381 --alsologtostderr --binary-mirror http://127.0.0.1:44455 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-343381   │ jenkins │ v1.37.0 │ 21 Nov 25 23:47 UTC │                     │
	│ delete  │ -p binary-mirror-343381                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-343381   │ jenkins │ v1.37.0 │ 21 Nov 25 23:47 UTC │ 21 Nov 25 23:47 UTC │
	│ addons  │ enable dashboard -p addons-882841                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-882841          │ jenkins │ v1.37.0 │ 21 Nov 25 23:47 UTC │                     │
	│ addons  │ disable dashboard -p addons-882841                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-882841          │ jenkins │ v1.37.0 │ 21 Nov 25 23:47 UTC │                     │
	│ start   │ -p addons-882841 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-882841          │ jenkins │ v1.37.0 │ 21 Nov 25 23:47 UTC │ 21 Nov 25 23:50 UTC │
	│ addons  │ addons-882841 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-882841          │ jenkins │ v1.37.0 │ 21 Nov 25 23:50 UTC │                     │
	│ addons  │ addons-882841 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-882841          │ jenkins │ v1.37.0 │ 21 Nov 25 23:50 UTC │                     │
	│ addons  │ addons-882841 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-882841          │ jenkins │ v1.37.0 │ 21 Nov 25 23:50 UTC │                     │
	│ addons  │ addons-882841 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-882841          │ jenkins │ v1.37.0 │ 21 Nov 25 23:51 UTC │                     │
	│ ip      │ addons-882841 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-882841          │ jenkins │ v1.37.0 │ 21 Nov 25 23:51 UTC │ 21 Nov 25 23:51 UTC │
	│ addons  │ addons-882841 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-882841          │ jenkins │ v1.37.0 │ 21 Nov 25 23:51 UTC │                     │
	│ ssh     │ addons-882841 ssh cat /opt/local-path-provisioner/pvc-f4b21e23-9cc1-4889-8265-d98a837f9eda_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-882841          │ jenkins │ v1.37.0 │ 21 Nov 25 23:51 UTC │ 21 Nov 25 23:51 UTC │
	│ addons  │ addons-882841 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-882841          │ jenkins │ v1.37.0 │ 21 Nov 25 23:51 UTC │                     │
	│ addons  │ addons-882841 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-882841          │ jenkins │ v1.37.0 │ 21 Nov 25 23:51 UTC │                     │
	│ addons  │ enable headlamp -p addons-882841 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-882841          │ jenkins │ v1.37.0 │ 21 Nov 25 23:51 UTC │                     │
	│ addons  │ addons-882841 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-882841          │ jenkins │ v1.37.0 │ 21 Nov 25 23:51 UTC │                     │
	│ addons  │ addons-882841 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-882841          │ jenkins │ v1.37.0 │ 21 Nov 25 23:51 UTC │                     │
	│ addons  │ addons-882841 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-882841          │ jenkins │ v1.37.0 │ 21 Nov 25 23:51 UTC │                     │
	│ ssh     │ addons-882841 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-882841          │ jenkins │ v1.37.0 │ 21 Nov 25 23:51 UTC │                     │
	│ addons  │ addons-882841 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-882841          │ jenkins │ v1.37.0 │ 21 Nov 25 23:51 UTC │                     │
	│ addons  │ addons-882841 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-882841          │ jenkins │ v1.37.0 │ 21 Nov 25 23:51 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-882841                                                                                                                                                                                                                                                                                                                                                                                           │ addons-882841          │ jenkins │ v1.37.0 │ 21 Nov 25 23:51 UTC │ 21 Nov 25 23:51 UTC │
	│ addons  │ addons-882841 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-882841          │ jenkins │ v1.37.0 │ 21 Nov 25 23:51 UTC │                     │
	│ ip      │ addons-882841 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-882841          │ jenkins │ v1.37.0 │ 21 Nov 25 23:53 UTC │ 21 Nov 25 23:53 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 23:47:47
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 23:47:47.136572  517697 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:47:47.136742  517697 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:47:47.136771  517697 out.go:374] Setting ErrFile to fd 2...
	I1121 23:47:47.136792  517697 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:47:47.137151  517697 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1121 23:47:47.137694  517697 out.go:368] Setting JSON to false
	I1121 23:47:47.139012  517697 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":16184,"bootTime":1763752684,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1121 23:47:47.139091  517697 start.go:143] virtualization:  
	I1121 23:47:47.142241  517697 out.go:179] * [addons-882841] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1121 23:47:47.146046  517697 out.go:179]   - MINIKUBE_LOCATION=21934
	I1121 23:47:47.146123  517697 notify.go:221] Checking for updates...
	I1121 23:47:47.151861  517697 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 23:47:47.154703  517697 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1121 23:47:47.157409  517697 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-513600/.minikube
	I1121 23:47:47.160274  517697 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1121 23:47:47.163198  517697 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 23:47:47.166192  517697 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 23:47:47.186586  517697 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1121 23:47:47.186710  517697 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 23:47:47.254013  517697 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-11-21 23:47:47.237918784 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 23:47:47.254120  517697 docker.go:319] overlay module found
	I1121 23:47:47.257244  517697 out.go:179] * Using the docker driver based on user configuration
	I1121 23:47:47.260086  517697 start.go:309] selected driver: docker
	I1121 23:47:47.260103  517697 start.go:930] validating driver "docker" against <nil>
	I1121 23:47:47.260116  517697 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 23:47:47.260836  517697 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 23:47:47.313203  517697 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-11-21 23:47:47.303843437 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 23:47:47.313367  517697 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1121 23:47:47.313592  517697 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 23:47:47.316566  517697 out.go:179] * Using Docker driver with root privileges
	I1121 23:47:47.319435  517697 cni.go:84] Creating CNI manager for ""
	I1121 23:47:47.319505  517697 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 23:47:47.319518  517697 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1121 23:47:47.319597  517697 start.go:353] cluster config:
	{Name:addons-882841 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-882841 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1121 23:47:47.322725  517697 out.go:179] * Starting "addons-882841" primary control-plane node in "addons-882841" cluster
	I1121 23:47:47.325532  517697 cache.go:134] Beginning downloading kic base image for docker with crio
	I1121 23:47:47.328474  517697 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1121 23:47:47.331328  517697 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 23:47:47.331375  517697 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1121 23:47:47.331389  517697 cache.go:65] Caching tarball of preloaded images
	I1121 23:47:47.331397  517697 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1121 23:47:47.331480  517697 preload.go:238] Found /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1121 23:47:47.331506  517697 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1121 23:47:47.331938  517697 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/config.json ...
	I1121 23:47:47.331961  517697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/config.json: {Name:mk942f8f2ad4834012eb7442332ef1f177632391 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:47:47.347262  517697 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e to local cache
	I1121 23:47:47.347417  517697 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local cache directory
	I1121 23:47:47.347438  517697 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local cache directory, skipping pull
	I1121 23:47:47.347442  517697 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in cache, skipping pull
	I1121 23:47:47.347449  517697 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e as a tarball
	I1121 23:47:47.347454  517697 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e from local cache
	I1121 23:48:05.149534  517697 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e from cached tarball
	I1121 23:48:05.149571  517697 cache.go:243] Successfully downloaded all kic artifacts
	I1121 23:48:05.149605  517697 start.go:360] acquireMachinesLock for addons-882841: {Name:mk32b69fee55935d27dd144fc65beab88981c1d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 23:48:05.149734  517697 start.go:364] duration metric: took 105.876µs to acquireMachinesLock for "addons-882841"
	I1121 23:48:05.149776  517697 start.go:93] Provisioning new machine with config: &{Name:addons-882841 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-882841 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 23:48:05.149875  517697 start.go:125] createHost starting for "" (driver="docker")
	I1121 23:48:05.151566  517697 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1121 23:48:05.151811  517697 start.go:159] libmachine.API.Create for "addons-882841" (driver="docker")
	I1121 23:48:05.151847  517697 client.go:173] LocalClient.Create starting
	I1121 23:48:05.151964  517697 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem
	I1121 23:48:05.331380  517697 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem
	I1121 23:48:05.458382  517697 cli_runner.go:164] Run: docker network inspect addons-882841 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1121 23:48:05.473231  517697 cli_runner.go:211] docker network inspect addons-882841 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1121 23:48:05.473326  517697 network_create.go:284] running [docker network inspect addons-882841] to gather additional debugging logs...
	I1121 23:48:05.473345  517697 cli_runner.go:164] Run: docker network inspect addons-882841
	W1121 23:48:05.494850  517697 cli_runner.go:211] docker network inspect addons-882841 returned with exit code 1
	I1121 23:48:05.494879  517697 network_create.go:287] error running [docker network inspect addons-882841]: docker network inspect addons-882841: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-882841 not found
	I1121 23:48:05.494893  517697 network_create.go:289] output of [docker network inspect addons-882841]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-882841 not found
	
	** /stderr **
	I1121 23:48:05.494993  517697 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 23:48:05.511973  517697 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019cc4e0}
	I1121 23:48:05.512012  517697 network_create.go:124] attempt to create docker network addons-882841 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1121 23:48:05.512064  517697 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-882841 addons-882841
	I1121 23:48:05.564350  517697 network_create.go:108] docker network addons-882841 192.168.49.0/24 created
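	The bridge network created above can be double-checked against the calculated static IP with the same inspect template the log uses; this is a sketch of a manual verification, not part of the test run (profile name taken from the log):
	  docker network inspect addons-882841 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	  # expected output based on the lines above: 192.168.49.0/24 192.168.49.1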
	I1121 23:48:05.564385  517697 kic.go:121] calculated static IP "192.168.49.2" for the "addons-882841" container
	I1121 23:48:05.564464  517697 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1121 23:48:05.579858  517697 cli_runner.go:164] Run: docker volume create addons-882841 --label name.minikube.sigs.k8s.io=addons-882841 --label created_by.minikube.sigs.k8s.io=true
	I1121 23:48:05.597413  517697 oci.go:103] Successfully created a docker volume addons-882841
	I1121 23:48:05.597509  517697 cli_runner.go:164] Run: docker run --rm --name addons-882841-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-882841 --entrypoint /usr/bin/test -v addons-882841:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -d /var/lib
	I1121 23:48:07.230877  517697 cli_runner.go:217] Completed: docker run --rm --name addons-882841-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-882841 --entrypoint /usr/bin/test -v addons-882841:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -d /var/lib: (1.633328928s)
	I1121 23:48:07.230916  517697 oci.go:107] Successfully prepared a docker volume addons-882841
	I1121 23:48:07.230969  517697 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 23:48:07.230982  517697 kic.go:194] Starting extracting preloaded images to volume ...
	I1121 23:48:07.231044  517697 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-882841:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir
	I1121 23:48:11.595058  517697 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-882841:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir: (4.363977578s)
	I1121 23:48:11.595090  517697 kic.go:203] duration metric: took 4.364104442s to extract preloaded images to volume ...
	W1121 23:48:11.595235  517697 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1121 23:48:11.595344  517697 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1121 23:48:11.650987  517697 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-882841 --name addons-882841 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-882841 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-882841 --network addons-882841 --ip 192.168.49.2 --volume addons-882841:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e
	I1121 23:48:11.905285  517697 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Running}}
	I1121 23:48:11.934176  517697 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:48:11.955140  517697 cli_runner.go:164] Run: docker exec addons-882841 stat /var/lib/dpkg/alternatives/iptables
	I1121 23:48:12.005438  517697 oci.go:144] the created container "addons-882841" has a running status.
	I1121 23:48:12.005473  517697 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa...
	I1121 23:48:12.410732  517697 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1121 23:48:12.435339  517697 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:48:12.459635  517697 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1121 23:48:12.459660  517697 kic_runner.go:114] Args: [docker exec --privileged addons-882841 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1121 23:48:12.517192  517697 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:48:12.546138  517697 machine.go:94] provisionDockerMachine start ...
	I1121 23:48:12.546236  517697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:48:12.581125  517697 main.go:143] libmachine: Using SSH client type: native
	I1121 23:48:12.581442  517697 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33495 <nil> <nil>}
	I1121 23:48:12.581452  517697 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 23:48:12.582477  517697 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45812->127.0.0.1:33495: read: connection reset by peer
	I1121 23:48:15.721412  517697 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-882841
	
	I1121 23:48:15.721439  517697 ubuntu.go:182] provisioning hostname "addons-882841"
	I1121 23:48:15.721502  517697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:48:15.738594  517697 main.go:143] libmachine: Using SSH client type: native
	I1121 23:48:15.738917  517697 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33495 <nil> <nil>}
	I1121 23:48:15.738936  517697 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-882841 && echo "addons-882841" | sudo tee /etc/hostname
	I1121 23:48:15.890656  517697 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-882841
	
	I1121 23:48:15.890752  517697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:48:15.907558  517697 main.go:143] libmachine: Using SSH client type: native
	I1121 23:48:15.907885  517697 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33495 <nil> <nil>}
	I1121 23:48:15.907913  517697 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-882841' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-882841/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-882841' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 23:48:16.050194  517697 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1121 23:48:16.050286  517697 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-513600/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-513600/.minikube}
	I1121 23:48:16.050350  517697 ubuntu.go:190] setting up certificates
	I1121 23:48:16.050381  517697 provision.go:84] configureAuth start
	I1121 23:48:16.050473  517697 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-882841
	I1121 23:48:16.067951  517697 provision.go:143] copyHostCerts
	I1121 23:48:16.068036  517697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem (1078 bytes)
	I1121 23:48:16.068202  517697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem (1123 bytes)
	I1121 23:48:16.068260  517697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem (1675 bytes)
	I1121 23:48:16.068312  517697 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem org=jenkins.addons-882841 san=[127.0.0.1 192.168.49.2 addons-882841 localhost minikube]
	I1121 23:48:16.302165  517697 provision.go:177] copyRemoteCerts
	I1121 23:48:16.302232  517697 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 23:48:16.302280  517697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:48:16.318408  517697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	I1121 23:48:16.417159  517697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1121 23:48:16.433274  517697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1121 23:48:16.449582  517697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1121 23:48:16.466913  517697 provision.go:87] duration metric: took 416.496161ms to configureAuth
	I1121 23:48:16.466943  517697 ubuntu.go:206] setting minikube options for container-runtime
	I1121 23:48:16.467121  517697 config.go:182] Loaded profile config "addons-882841": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:48:16.467231  517697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:48:16.483442  517697 main.go:143] libmachine: Using SSH client type: native
	I1121 23:48:16.483801  517697 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33495 <nil> <nil>}
	I1121 23:48:16.483820  517697 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1121 23:48:16.752746  517697 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1121 23:48:16.752766  517697 machine.go:97] duration metric: took 4.206600004s to provisionDockerMachine
	I1121 23:48:16.752777  517697 client.go:176] duration metric: took 11.600920915s to LocalClient.Create
	I1121 23:48:16.752790  517697 start.go:167] duration metric: took 11.60097999s to libmachine.API.Create "addons-882841"
	I1121 23:48:16.752798  517697 start.go:293] postStartSetup for "addons-882841" (driver="docker")
	I1121 23:48:16.752807  517697 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 23:48:16.752875  517697 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 23:48:16.752934  517697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:48:16.769771  517697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	I1121 23:48:16.869846  517697 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 23:48:16.872921  517697 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 23:48:16.872950  517697 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 23:48:16.872962  517697 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/addons for local assets ...
	I1121 23:48:16.873024  517697 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/files for local assets ...
	I1121 23:48:16.873052  517697 start.go:296] duration metric: took 120.248926ms for postStartSetup
	I1121 23:48:16.873361  517697 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-882841
	I1121 23:48:16.889496  517697 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/config.json ...
	I1121 23:48:16.889779  517697 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 23:48:16.889960  517697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:48:16.906306  517697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	I1121 23:48:17.004031  517697 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 23:48:17.009063  517697 start.go:128] duration metric: took 11.859173381s to createHost
	I1121 23:48:17.009089  517697 start.go:83] releasing machines lock for "addons-882841", held for 11.859342714s
	I1121 23:48:17.009190  517697 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-882841
	I1121 23:48:17.028832  517697 ssh_runner.go:195] Run: cat /version.json
	I1121 23:48:17.028889  517697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:48:17.029147  517697 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 23:48:17.029208  517697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:48:17.048237  517697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	I1121 23:48:17.052561  517697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	I1121 23:48:17.230903  517697 ssh_runner.go:195] Run: systemctl --version
	I1121 23:48:17.237108  517697 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1121 23:48:17.271256  517697 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 23:48:17.275463  517697 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 23:48:17.275532  517697 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 23:48:17.298265  517697 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1121 23:48:17.298285  517697 start.go:496] detecting cgroup driver to use...
	I1121 23:48:17.298315  517697 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1121 23:48:17.298364  517697 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1121 23:48:17.314136  517697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1121 23:48:17.326555  517697 docker.go:218] disabling cri-docker service (if available) ...
	I1121 23:48:17.326667  517697 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 23:48:17.343626  517697 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 23:48:17.361525  517697 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 23:48:17.482750  517697 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 23:48:17.607661  517697 docker.go:234] disabling docker service ...
	I1121 23:48:17.607779  517697 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 23:48:17.627288  517697 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 23:48:17.640236  517697 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 23:48:17.765937  517697 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 23:48:17.880912  517697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 23:48:17.893686  517697 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 23:48:17.907952  517697 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1121 23:48:17.908026  517697 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 23:48:17.916599  517697 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1121 23:48:17.916665  517697 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 23:48:17.925668  517697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 23:48:17.935119  517697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 23:48:17.943902  517697 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 23:48:17.951889  517697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 23:48:17.960462  517697 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 23:48:17.973487  517697 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 23:48:17.982021  517697 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 23:48:17.989407  517697 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 23:48:17.996629  517697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 23:48:18.115499  517697 ssh_runner.go:195] Run: sudo systemctl restart crio
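	Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings before crio is restarted (a sketch inferred from the commands; section headers are shown only for orientation and the drop-in on the node also keeps its pre-existing contents):
	  [crio.image]
	  pause_image = "registry.k8s.io/pause:3.10.1"
	  [crio.runtime]
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]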
	I1121 23:48:18.279685  517697 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1121 23:48:18.279789  517697 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1121 23:48:18.283660  517697 start.go:564] Will wait 60s for crictl version
	I1121 23:48:18.283729  517697 ssh_runner.go:195] Run: which crictl
	I1121 23:48:18.287221  517697 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 23:48:18.314920  517697 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1121 23:48:18.315041  517697 ssh_runner.go:195] Run: crio --version
	I1121 23:48:18.345078  517697 ssh_runner.go:195] Run: crio --version
	I1121 23:48:18.373136  517697 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1121 23:48:18.374363  517697 cli_runner.go:164] Run: docker network inspect addons-882841 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 23:48:18.389968  517697 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1121 23:48:18.393792  517697 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 23:48:18.403095  517697 kubeadm.go:884] updating cluster {Name:addons-882841 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-882841 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 23:48:18.403222  517697 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 23:48:18.403275  517697 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 23:48:18.441897  517697 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 23:48:18.441922  517697 crio.go:433] Images already preloaded, skipping extraction
	I1121 23:48:18.441977  517697 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 23:48:18.466095  517697 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 23:48:18.466122  517697 cache_images.go:86] Images are preloaded, skipping loading
	I1121 23:48:18.466130  517697 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1121 23:48:18.466213  517697 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-882841 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-882841 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
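	The kubelet unit and drop-in assembled above are written to /lib/systemd/system/kubelet.service and /etc/systemd/system/kubelet.service.d/10-kubeadm.conf by the scp lines further down; as a sketch (not taken from the test run), the merged unit could be reviewed on the node with:
	  out/minikube-linux-arm64 -p addons-882841 ssh "systemctl cat kubelet"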
	I1121 23:48:18.466306  517697 ssh_runner.go:195] Run: crio config
	I1121 23:48:18.519668  517697 cni.go:84] Creating CNI manager for ""
	I1121 23:48:18.519692  517697 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 23:48:18.519719  517697 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 23:48:18.519744  517697 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-882841 NodeName:addons-882841 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 23:48:18.519870  517697 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-882841"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
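	The rendered kubeadm configuration above is copied to the node as /var/tmp/minikube/kubeadm.yaml.new (see the scp line below); a quick way to inspect exactly what kubeadm will consume, sketched here rather than taken from the test, is:
	  out/minikube-linux-arm64 -p addons-882841 ssh "sudo cat /var/tmp/minikube/kubeadm.yaml.new"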
	
	I1121 23:48:18.519943  517697 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 23:48:18.527814  517697 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 23:48:18.527917  517697 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 23:48:18.535677  517697 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1121 23:48:18.548725  517697 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 23:48:18.561762  517697 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1121 23:48:18.574841  517697 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1121 23:48:18.578495  517697 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 23:48:18.588813  517697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 23:48:18.710443  517697 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 23:48:18.727312  517697 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841 for IP: 192.168.49.2
	I1121 23:48:18.727333  517697 certs.go:195] generating shared ca certs ...
	I1121 23:48:18.727348  517697 certs.go:227] acquiring lock for ca certs: {Name:mkaf4c79493334cb2058b349c36be7473837f9b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:48:18.727472  517697 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key
	I1121 23:48:18.911581  517697 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt ...
	I1121 23:48:18.911614  517697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt: {Name:mk9aa55453fcf9a5a4c30ab97d8e3cf50d149db9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:48:18.911819  517697 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key ...
	I1121 23:48:18.911832  517697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key: {Name:mka98daf7e34c04048cf452042bef2d442adadb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:48:18.911919  517697 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key
	I1121 23:48:19.140262  517697 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.crt ...
	I1121 23:48:19.140292  517697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.crt: {Name:mkaf717a1819d0db70b6e4130ef58174f05fbada Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:48:19.140472  517697 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key ...
	I1121 23:48:19.140484  517697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key: {Name:mkd8ffc55dc4383da1bb533ba0063c89b86f7eda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:48:19.140594  517697 certs.go:257] generating profile certs ...
	I1121 23:48:19.140660  517697 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/client.key
	I1121 23:48:19.140678  517697 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/client.crt with IP's: []
	I1121 23:48:19.509316  517697 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/client.crt ...
	I1121 23:48:19.509358  517697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/client.crt: {Name:mk846505275ee80b58d909ce5fd9b6d3a3629ebb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:48:19.509541  517697 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/client.key ...
	I1121 23:48:19.509554  517697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/client.key: {Name:mk1565060f77005f003a53864b1e37ed589f4b5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:48:19.509634  517697 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/apiserver.key.696983bf
	I1121 23:48:19.509656  517697 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/apiserver.crt.696983bf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1121 23:48:19.801808  517697 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/apiserver.crt.696983bf ...
	I1121 23:48:19.801840  517697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/apiserver.crt.696983bf: {Name:mk50ea577b93205edaa13b5cdd71cddb9428b381 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:48:19.802021  517697 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/apiserver.key.696983bf ...
	I1121 23:48:19.802038  517697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/apiserver.key.696983bf: {Name:mk33f735dfbc5b7a4a68736b59562f6821940f4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:48:19.802117  517697 certs.go:382] copying /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/apiserver.crt.696983bf -> /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/apiserver.crt
	I1121 23:48:19.802197  517697 certs.go:386] copying /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/apiserver.key.696983bf -> /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/apiserver.key
	I1121 23:48:19.802248  517697 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/proxy-client.key
	I1121 23:48:19.802271  517697 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/proxy-client.crt with IP's: []
	I1121 23:48:19.967783  517697 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/proxy-client.crt ...
	I1121 23:48:19.967812  517697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/proxy-client.crt: {Name:mk2a9aab16ec6d745447f7af0a56129168b939be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:48:19.967980  517697 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/proxy-client.key ...
	I1121 23:48:19.967994  517697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/proxy-client.key: {Name:mk7e74e0c51b6b1ebff112ae6f72f2251877ef76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:48:19.968182  517697 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem (1679 bytes)
	I1121 23:48:19.968224  517697 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem (1078 bytes)
	I1121 23:48:19.968252  517697 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem (1123 bytes)
	I1121 23:48:19.968286  517697 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem (1675 bytes)
	I1121 23:48:19.968934  517697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 23:48:19.986622  517697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1121 23:48:20.007319  517697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 23:48:20.030364  517697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1121 23:48:20.049721  517697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1121 23:48:20.067831  517697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1121 23:48:20.086649  517697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 23:48:20.105563  517697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1121 23:48:20.124429  517697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 23:48:20.142128  517697 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 23:48:20.156162  517697 ssh_runner.go:195] Run: openssl version
	I1121 23:48:20.162869  517697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 23:48:20.171978  517697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 23:48:20.175908  517697 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I1121 23:48:20.175977  517697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 23:48:20.217381  517697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1121 23:48:20.225893  517697 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 23:48:20.229383  517697 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1121 23:48:20.229433  517697 kubeadm.go:401] StartCluster: {Name:addons-882841 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-882841 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 23:48:20.229506  517697 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 23:48:20.229571  517697 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 23:48:20.258235  517697 cri.go:89] found id: ""
	I1121 23:48:20.258353  517697 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 23:48:20.266115  517697 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1121 23:48:20.275056  517697 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1121 23:48:20.275129  517697 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1121 23:48:20.286902  517697 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1121 23:48:20.286923  517697 kubeadm.go:158] found existing configuration files:
	
	I1121 23:48:20.286980  517697 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1121 23:48:20.296202  517697 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1121 23:48:20.296316  517697 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1121 23:48:20.304464  517697 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1121 23:48:20.313524  517697 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1121 23:48:20.313587  517697 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1121 23:48:20.321705  517697 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1121 23:48:20.331182  517697 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1121 23:48:20.331270  517697 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1121 23:48:20.338363  517697 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1121 23:48:20.345664  517697 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1121 23:48:20.345748  517697 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1121 23:48:20.353173  517697 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1121 23:48:20.416064  517697 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1121 23:48:20.416373  517697 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1121 23:48:20.482942  517697 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1121 23:48:37.128530  517697 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1121 23:48:37.128587  517697 kubeadm.go:319] [preflight] Running pre-flight checks
	I1121 23:48:37.128676  517697 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1121 23:48:37.128767  517697 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1121 23:48:37.128802  517697 kubeadm.go:319] OS: Linux
	I1121 23:48:37.128852  517697 kubeadm.go:319] CGROUPS_CPU: enabled
	I1121 23:48:37.128901  517697 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1121 23:48:37.128948  517697 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1121 23:48:37.128996  517697 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1121 23:48:37.129045  517697 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1121 23:48:37.129093  517697 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1121 23:48:37.129138  517697 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1121 23:48:37.129194  517697 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1121 23:48:37.129240  517697 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1121 23:48:37.129312  517697 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1121 23:48:37.129406  517697 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1121 23:48:37.129496  517697 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1121 23:48:37.129558  517697 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1121 23:48:37.130973  517697 out.go:252]   - Generating certificates and keys ...
	I1121 23:48:37.131073  517697 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1121 23:48:37.131161  517697 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1121 23:48:37.131245  517697 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1121 23:48:37.131319  517697 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1121 23:48:37.131421  517697 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1121 23:48:37.131492  517697 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1121 23:48:37.131551  517697 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1121 23:48:37.131675  517697 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-882841 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1121 23:48:37.131755  517697 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1121 23:48:37.131898  517697 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-882841 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1121 23:48:37.131975  517697 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1121 23:48:37.132048  517697 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1121 23:48:37.132108  517697 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1121 23:48:37.132191  517697 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1121 23:48:37.132257  517697 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1121 23:48:37.132322  517697 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1121 23:48:37.132399  517697 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1121 23:48:37.132499  517697 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1121 23:48:37.132582  517697 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1121 23:48:37.132677  517697 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1121 23:48:37.132755  517697 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1121 23:48:37.134298  517697 out.go:252]   - Booting up control plane ...
	I1121 23:48:37.134399  517697 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1121 23:48:37.134504  517697 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1121 23:48:37.134611  517697 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1121 23:48:37.134723  517697 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1121 23:48:37.134864  517697 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1121 23:48:37.135004  517697 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1121 23:48:37.135105  517697 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1121 23:48:37.135163  517697 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1121 23:48:37.135316  517697 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1121 23:48:37.135449  517697 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1121 23:48:37.135529  517697 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001578574s
	I1121 23:48:37.135648  517697 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1121 23:48:37.135741  517697 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1121 23:48:37.135889  517697 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1121 23:48:37.136026  517697 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1121 23:48:37.136122  517697 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.762253005s
	I1121 23:48:37.136207  517697 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.63050742s
	I1121 23:48:37.136321  517697 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001177086s
	I1121 23:48:37.136456  517697 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1121 23:48:37.136587  517697 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1121 23:48:37.136671  517697 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1121 23:48:37.136871  517697 kubeadm.go:319] [mark-control-plane] Marking the node addons-882841 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1121 23:48:37.136946  517697 kubeadm.go:319] [bootstrap-token] Using token: aisjqn.1bv21k6igtg6gyat
	I1121 23:48:37.138344  517697 out.go:252]   - Configuring RBAC rules ...
	I1121 23:48:37.138495  517697 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1121 23:48:37.138604  517697 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1121 23:48:37.138815  517697 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1121 23:48:37.138977  517697 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1121 23:48:37.139120  517697 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1121 23:48:37.139217  517697 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1121 23:48:37.139345  517697 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1121 23:48:37.139431  517697 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1121 23:48:37.139488  517697 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1121 23:48:37.139496  517697 kubeadm.go:319] 
	I1121 23:48:37.139557  517697 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1121 23:48:37.139565  517697 kubeadm.go:319] 
	I1121 23:48:37.139654  517697 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1121 23:48:37.139673  517697 kubeadm.go:319] 
	I1121 23:48:37.139732  517697 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1121 23:48:37.139842  517697 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1121 23:48:37.139917  517697 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1121 23:48:37.139930  517697 kubeadm.go:319] 
	I1121 23:48:37.139995  517697 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1121 23:48:37.140004  517697 kubeadm.go:319] 
	I1121 23:48:37.140052  517697 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1121 23:48:37.140069  517697 kubeadm.go:319] 
	I1121 23:48:37.140122  517697 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1121 23:48:37.140200  517697 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1121 23:48:37.140276  517697 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1121 23:48:37.140284  517697 kubeadm.go:319] 
	I1121 23:48:37.140368  517697 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1121 23:48:37.140447  517697 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1121 23:48:37.140456  517697 kubeadm.go:319] 
	I1121 23:48:37.140541  517697 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token aisjqn.1bv21k6igtg6gyat \
	I1121 23:48:37.140678  517697 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ecfebb5fda4f065a571cf90106e71e452abce05aaa4d3155b81d7383977d6854 \
	I1121 23:48:37.140725  517697 kubeadm.go:319] 	--control-plane 
	I1121 23:48:37.140736  517697 kubeadm.go:319] 
	I1121 23:48:37.140837  517697 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1121 23:48:37.140847  517697 kubeadm.go:319] 
	I1121 23:48:37.140939  517697 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token aisjqn.1bv21k6igtg6gyat \
	I1121 23:48:37.141073  517697 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ecfebb5fda4f065a571cf90106e71e452abce05aaa4d3155b81d7383977d6854 
	I1121 23:48:37.141088  517697 cni.go:84] Creating CNI manager for ""
	I1121 23:48:37.141096  517697 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 23:48:37.142711  517697 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1121 23:48:37.144311  517697 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1121 23:48:37.149645  517697 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1121 23:48:37.149668  517697 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1121 23:48:37.163740  517697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1121 23:48:37.470381  517697 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1121 23:48:37.470532  517697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:48:37.470610  517697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-882841 minikube.k8s.io/updated_at=2025_11_21T23_48_37_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785 minikube.k8s.io/name=addons-882841 minikube.k8s.io/primary=true
	I1121 23:48:37.701514  517697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:48:37.701567  517697 ops.go:34] apiserver oom_adj: -16
	I1121 23:48:38.201667  517697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:48:38.702267  517697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:48:39.201920  517697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:48:39.701633  517697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:48:40.201654  517697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:48:40.702501  517697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:48:40.799912  517697 kubeadm.go:1114] duration metric: took 3.329431907s to wait for elevateKubeSystemPrivileges
	I1121 23:48:40.799948  517697 kubeadm.go:403] duration metric: took 20.570518195s to StartCluster
	I1121 23:48:40.799965  517697 settings.go:142] acquiring lock: {Name:mk6c31eb57ec65b047b78b4e1046e03fe7cc77bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:48:40.800089  517697 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1121 23:48:40.800515  517697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/kubeconfig: {Name:mkcd094d8a765e5a81369c0fb33cb5e696b17bb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:48:40.800700  517697 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1121 23:48:40.800726  517697 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 23:48:40.800965  517697 config.go:182] Loaded profile config "addons-882841": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:48:40.801005  517697 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1121 23:48:40.801076  517697 addons.go:70] Setting yakd=true in profile "addons-882841"
	I1121 23:48:40.801090  517697 addons.go:239] Setting addon yakd=true in "addons-882841"
	I1121 23:48:40.801114  517697 host.go:66] Checking if "addons-882841" exists ...
	I1121 23:48:40.801556  517697 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:48:40.801760  517697 addons.go:70] Setting inspektor-gadget=true in profile "addons-882841"
	I1121 23:48:40.801786  517697 addons.go:239] Setting addon inspektor-gadget=true in "addons-882841"
	I1121 23:48:40.801847  517697 host.go:66] Checking if "addons-882841" exists ...
	I1121 23:48:40.802135  517697 addons.go:70] Setting metrics-server=true in profile "addons-882841"
	I1121 23:48:40.802158  517697 addons.go:239] Setting addon metrics-server=true in "addons-882841"
	I1121 23:48:40.802183  517697 host.go:66] Checking if "addons-882841" exists ...
	I1121 23:48:40.802436  517697 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:48:40.802611  517697 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:48:40.804764  517697 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-882841"
	I1121 23:48:40.804922  517697 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-882841"
	I1121 23:48:40.804954  517697 host.go:66] Checking if "addons-882841" exists ...
	I1121 23:48:40.806700  517697 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-882841"
	I1121 23:48:40.806759  517697 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-882841"
	I1121 23:48:40.806804  517697 host.go:66] Checking if "addons-882841" exists ...
	I1121 23:48:40.807250  517697 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:48:40.807310  517697 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:48:40.807453  517697 addons.go:70] Setting cloud-spanner=true in profile "addons-882841"
	I1121 23:48:40.807979  517697 addons.go:239] Setting addon cloud-spanner=true in "addons-882841"
	I1121 23:48:40.808003  517697 host.go:66] Checking if "addons-882841" exists ...
	I1121 23:48:40.808403  517697 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:48:40.807465  517697 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-882841"
	I1121 23:48:40.807472  517697 addons.go:70] Setting default-storageclass=true in profile "addons-882841"
	I1121 23:48:40.821208  517697 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-882841"
	I1121 23:48:40.842078  517697 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-882841"
	I1121 23:48:40.842205  517697 host.go:66] Checking if "addons-882841" exists ...
	I1121 23:48:40.842772  517697 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:48:40.807479  517697 addons.go:70] Setting gcp-auth=true in profile "addons-882841"
	I1121 23:48:40.843040  517697 mustload.go:66] Loading cluster: addons-882841
	I1121 23:48:40.843240  517697 config.go:182] Loaded profile config "addons-882841": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:48:40.843538  517697 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:48:40.857251  517697 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:48:40.807484  517697 addons.go:70] Setting ingress=true in profile "addons-882841"
	I1121 23:48:40.807492  517697 addons.go:70] Setting ingress-dns=true in profile "addons-882841"
	I1121 23:48:40.857554  517697 addons.go:239] Setting addon ingress-dns=true in "addons-882841"
	I1121 23:48:40.807566  517697 out.go:179] * Verifying Kubernetes components...
	I1121 23:48:40.807714  517697 addons.go:70] Setting volcano=true in profile "addons-882841"
	I1121 23:48:40.807722  517697 addons.go:70] Setting registry=true in profile "addons-882841"
	I1121 23:48:40.807727  517697 addons.go:70] Setting registry-creds=true in profile "addons-882841"
	I1121 23:48:40.807733  517697 addons.go:70] Setting storage-provisioner=true in profile "addons-882841"
	I1121 23:48:40.807738  517697 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-882841"
	I1121 23:48:40.807769  517697 addons.go:70] Setting volumesnapshots=true in profile "addons-882841"
	I1121 23:48:40.877166  517697 addons.go:239] Setting addon volumesnapshots=true in "addons-882841"
	I1121 23:48:40.877210  517697 host.go:66] Checking if "addons-882841" exists ...
	I1121 23:48:40.877662  517697 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:48:40.893171  517697 addons.go:239] Setting addon ingress=true in "addons-882841"
	I1121 23:48:40.893243  517697 host.go:66] Checking if "addons-882841" exists ...
	I1121 23:48:40.893710  517697 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:48:40.917467  517697 host.go:66] Checking if "addons-882841" exists ...
	I1121 23:48:40.918136  517697 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:48:40.920523  517697 addons.go:239] Setting addon volcano=true in "addons-882841"
	I1121 23:48:40.920612  517697 host.go:66] Checking if "addons-882841" exists ...
	I1121 23:48:40.921203  517697 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:48:40.939318  517697 addons.go:239] Setting addon registry=true in "addons-882841"
	I1121 23:48:40.939449  517697 host.go:66] Checking if "addons-882841" exists ...
	I1121 23:48:40.940007  517697 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:48:40.947191  517697 addons.go:239] Setting addon registry-creds=true in "addons-882841"
	I1121 23:48:40.947248  517697 host.go:66] Checking if "addons-882841" exists ...
	I1121 23:48:40.947742  517697 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:48:40.959642  517697 addons.go:239] Setting addon storage-provisioner=true in "addons-882841"
	I1121 23:48:40.959695  517697 host.go:66] Checking if "addons-882841" exists ...
	I1121 23:48:40.960178  517697 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:48:40.962094  517697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 23:48:40.974852  517697 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-882841"
	I1121 23:48:40.975199  517697 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:48:40.992633  517697 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1121 23:48:40.992961  517697 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1121 23:48:41.004129  517697 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1121 23:48:41.008526  517697 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1121 23:48:41.008688  517697 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1121 23:48:41.008696  517697 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1121 23:48:41.008701  517697 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1121 23:48:41.008704  517697 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1121 23:48:41.008719  517697 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1121 23:48:41.008726  517697 out.go:179]   - Using image docker.io/registry:3.0.0
	I1121 23:48:41.008747  517697 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1121 23:48:41.008807  517697 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1121 23:48:41.011284  517697 host.go:66] Checking if "addons-882841" exists ...
	I1121 23:48:41.016902  517697 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1121 23:48:41.021285  517697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1121 23:48:41.021627  517697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:48:41.017899  517697 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1121 23:48:41.017908  517697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1121 23:48:41.018008  517697 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1121 23:48:41.021140  517697 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1121 23:48:41.035236  517697 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1121 23:48:41.035341  517697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:48:41.038425  517697 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1121 23:48:41.038452  517697 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1121 23:48:41.038516  517697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:48:41.053925  517697 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1121 23:48:41.053991  517697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1121 23:48:41.054094  517697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:48:41.062608  517697 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1121 23:48:41.065935  517697 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1121 23:48:41.070064  517697 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1121 23:48:41.076137  517697 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1121 23:48:41.079015  517697 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1121 23:48:41.090942  517697 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1121 23:48:41.091924  517697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:48:41.093239  517697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:48:41.105192  517697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1121 23:48:41.105272  517697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:48:41.110181  517697 addons.go:239] Setting addon default-storageclass=true in "addons-882841"
	I1121 23:48:41.110219  517697 host.go:66] Checking if "addons-882841" exists ...
	I1121 23:48:41.110619  517697 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:48:41.122857  517697 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1121 23:48:41.129932  517697 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1121 23:48:41.132891  517697 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1121 23:48:41.135856  517697 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1121 23:48:41.135880  517697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1121 23:48:41.135956  517697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:48:41.156495  517697 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1121 23:48:41.156514  517697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1121 23:48:41.156572  517697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:48:41.157390  517697 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1121 23:48:41.183218  517697 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1121 23:48:41.183239  517697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1121 23:48:41.183301  517697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:48:41.185958  517697 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1121 23:48:41.191268  517697 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1121 23:48:41.191354  517697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1121 23:48:41.191473  517697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:48:41.205598  517697 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 23:48:41.208504  517697 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 23:48:41.208527  517697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 23:48:41.208587  517697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:48:41.215711  517697 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1121 23:48:41.221977  517697 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1121 23:48:41.225680  517697 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1121 23:48:41.225751  517697 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1121 23:48:41.226075  517697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:48:41.260681  517697 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-882841"
	I1121 23:48:41.260723  517697 host.go:66] Checking if "addons-882841" exists ...
	I1121 23:48:41.261117  517697 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	W1121 23:48:41.284444  517697 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1121 23:48:41.292278  517697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	I1121 23:48:41.315521  517697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	I1121 23:48:41.335825  517697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	I1121 23:48:41.358169  517697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	I1121 23:48:41.381915  517697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	I1121 23:48:41.410063  517697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	I1121 23:48:41.416459  517697 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 23:48:41.416477  517697 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 23:48:41.416547  517697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:48:41.420960  517697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	I1121 23:48:41.422054  517697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	I1121 23:48:41.427521  517697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	I1121 23:48:41.428867  517697 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 23:48:41.441926  517697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	I1121 23:48:41.442740  517697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	I1121 23:48:41.443356  517697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	I1121 23:48:41.462979  517697 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1121 23:48:41.465197  517697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	W1121 23:48:41.468442  517697 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1121 23:48:41.468469  517697 retry.go:31] will retry after 227.362888ms: ssh: handshake failed: EOF
	I1121 23:48:41.472466  517697 out.go:179]   - Using image docker.io/busybox:stable
	I1121 23:48:41.475275  517697 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1121 23:48:41.475296  517697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1121 23:48:41.475361  517697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:48:41.493329  517697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	I1121 23:48:41.504926  517697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	W1121 23:48:41.505995  517697 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1121 23:48:41.506019  517697 retry.go:31] will retry after 341.176085ms: ssh: handshake failed: EOF
	W1121 23:48:41.697448  517697 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1121 23:48:41.697527  517697 retry.go:31] will retry after 327.994212ms: ssh: handshake failed: EOF
	I1121 23:48:42.200413  517697 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1121 23:48:42.200500  517697 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1121 23:48:42.275539  517697 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1121 23:48:42.275622  517697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1121 23:48:42.297953  517697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 23:48:42.309739  517697 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1121 23:48:42.309834  517697 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1121 23:48:42.341174  517697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1121 23:48:42.342395  517697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1121 23:48:42.365927  517697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1121 23:48:42.368550  517697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 23:48:42.399231  517697 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1121 23:48:42.399254  517697 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1121 23:48:42.418555  517697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1121 23:48:42.425233  517697 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1121 23:48:42.425312  517697 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1121 23:48:42.429285  517697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1121 23:48:42.437173  517697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1121 23:48:42.451364  517697 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1121 23:48:42.451441  517697 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1121 23:48:42.492612  517697 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1121 23:48:42.492689  517697 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1121 23:48:42.548252  517697 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1121 23:48:42.548330  517697 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1121 23:48:42.558540  517697 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.129612068s)
	I1121 23:48:42.558707  517697 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.566050239s)
	I1121 23:48:42.558741  517697 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1121 23:48:42.560162  517697 node_ready.go:35] waiting up to 6m0s for node "addons-882841" to be "Ready" ...
	I1121 23:48:42.585313  517697 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1121 23:48:42.585334  517697 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1121 23:48:42.600048  517697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1121 23:48:42.606452  517697 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1121 23:48:42.606519  517697 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1121 23:48:42.622931  517697 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1121 23:48:42.623007  517697 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1121 23:48:42.638911  517697 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1121 23:48:42.638935  517697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1121 23:48:42.791446  517697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1121 23:48:42.797349  517697 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1121 23:48:42.797376  517697 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1121 23:48:42.842188  517697 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1121 23:48:42.842214  517697 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1121 23:48:42.862812  517697 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1121 23:48:42.862835  517697 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1121 23:48:42.873710  517697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1121 23:48:42.878867  517697 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1121 23:48:42.878896  517697 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1121 23:48:42.949932  517697 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1121 23:48:42.949956  517697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1121 23:48:43.036741  517697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1121 23:48:43.041664  517697 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1121 23:48:43.041688  517697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1121 23:48:43.065098  517697 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-882841" context rescaled to 1 replicas
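The coredns rescale reported at kapi.go:214 is an ordinary deployment scale update on the addons-882841 context. A hedged client-go sketch of that operation (clientset name is an assumption for illustration):

	package sketch

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// scaleCoreDNS sets the kube-system/coredns deployment to a single replica via the scale subresource.
	func scaleCoreDNS(ctx context.Context, cs *kubernetes.Clientset) error {
		scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			return err
		}
		scale.Spec.Replicas = 1
		_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
		return err
	}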
	I1121 23:48:43.104097  517697 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1121 23:48:43.104130  517697 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1121 23:48:43.131184  517697 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1121 23:48:43.131209  517697 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1121 23:48:43.187217  517697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1121 23:48:43.191056  517697 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1121 23:48:43.191078  517697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1121 23:48:43.196607  517697 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1121 23:48:43.196632  517697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1121 23:48:43.261080  517697 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1121 23:48:43.261104  517697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1121 23:48:43.282692  517697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1121 23:48:43.572640  517697 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1121 23:48:43.572667  517697 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1121 23:48:43.726081  517697 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.428031103s)
	I1121 23:48:43.932650  517697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1121 23:48:44.622106  517697 node_ready.go:57] node "addons-882841" has "Ready":"False" status (will retry)
	W1121 23:48:47.065403  517697 node_ready.go:57] node "addons-882841" has "Ready":"False" status (will retry)
	I1121 23:48:47.212765  517697 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.871485315s)
	I1121 23:48:47.212799  517697 addons.go:495] Verifying addon ingress=true in "addons-882841"
	I1121 23:48:47.212973  517697 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.870514611s)
	I1121 23:48:47.213015  517697 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.847018641s)
	I1121 23:48:47.213064  517697 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.844448098s)
	I1121 23:48:47.213099  517697 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.794474042s)
	I1121 23:48:47.213132  517697 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.783785874s)
	I1121 23:48:47.213159  517697 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.775914766s)
	I1121 23:48:47.213213  517697 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.613096772s)
	I1121 23:48:47.213356  517697 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.421879992s)
	I1121 23:48:47.213425  517697 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.339689257s)
	I1121 23:48:47.213443  517697 addons.go:495] Verifying addon registry=true in "addons-882841"
	I1121 23:48:47.213530  517697 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.176760227s)
	I1121 23:48:47.213545  517697 addons.go:495] Verifying addon metrics-server=true in "addons-882841"
	I1121 23:48:47.213579  517697 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.026337395s)
	I1121 23:48:47.213882  517697 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.931156996s)
	W1121 23:48:47.213911  517697 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1121 23:48:47.213929  517697 retry.go:31] will retry after 137.248039ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
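The failure above is the usual ordering race: the csi-hostpath VolumeSnapshotClass object is applied in the same batch as the snapshot.storage.k8s.io CRDs that define it, so the API server has not yet established the new kind when kubectl maps the object, and the apply exits with "ensure CRDs are installed first". As the retry.go line shows, minikube simply waits briefly and re-applies (the later run adds --force). A minimal sketch of that retry pattern around kubectl; the function name, attempt count, and backoff here are illustrative, not minikube's actual values:

	package sketch

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// applyWithRetry re-runs `kubectl apply` a few times so CRD-backed kinds created in the
	// same batch have time to be established before the objects that use them are applied.
	func applyWithRetry(kubectl string, files []string, attempts int) error {
		args := []string{"apply"}
		for _, f := range files {
			args = append(args, "-f", f)
		}
		var lastErr error
		for i := 0; i < attempts; i++ {
			out, err := exec.Command(kubectl, args...).CombinedOutput()
			if err == nil {
				return nil
			}
			lastErr = fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
			time.Sleep(time.Duration(i+1) * 500 * time.Millisecond) // simple linear backoff
		}
		return lastErr
	}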
	I1121 23:48:47.216039  517697 out.go:179] * Verifying ingress addon...
	I1121 23:48:47.218009  517697 out.go:179] * Verifying registry addon...
	I1121 23:48:47.218010  517697 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-882841 service yakd-dashboard -n yakd-dashboard
	
	I1121 23:48:47.222278  517697 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1121 23:48:47.223070  517697 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1121 23:48:47.228113  517697 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1121 23:48:47.228131  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:47.233595  517697 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1121 23:48:47.233613  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
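Each of the kapi.go "waiting for pod" lines that follow repeats the same check: list the pods matching the addon's label selector in its namespace and wait until they are all Running. A client-go sketch of one iteration of that check, with the selector and namespace taken from the registry wait above and the clientset assumed:

	package sketch

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// podsRunning reports whether at least one pod matches the selector and all matches are Running.
	func podsRunning(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, err
		}
		if len(pods.Items) == 0 {
			return false, nil
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				return false, nil
			}
		}
		return true, nil
	}

	// e.g. podsRunning(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=registry")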
	I1121 23:48:47.352282  517697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1121 23:48:47.551727  517697 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.619015174s)
	I1121 23:48:47.551764  517697 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-882841"
	I1121 23:48:47.554633  517697 out.go:179] * Verifying csi-hostpath-driver addon...
	I1121 23:48:47.558397  517697 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1121 23:48:47.569660  517697 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1121 23:48:47.569729  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:47.737586  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:47.738405  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:48.061918  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:48.225916  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:48.226248  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:48.562211  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:48.625190  517697 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1121 23:48:48.625301  517697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:48:48.644632  517697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
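The cli_runner call above resolves which host port docker mapped to the container's SSH port (22/tcp); the sshutil line then dials 127.0.0.1 on that port with the machine's id_rsa key to push the gcp-auth files. A small sketch of the port lookup, using the same docker inspect template as the log (function name and error handling are illustrative):

	package sketch

	import (
		"os/exec"
		"strings"
	)

	// sshHostPort reads the host port docker has mapped to the container's 22/tcp.
	func sshHostPort(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}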
	I1121 23:48:48.726251  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:48.726446  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:48.750941  517697 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1121 23:48:48.763430  517697 addons.go:239] Setting addon gcp-auth=true in "addons-882841"
	I1121 23:48:48.763476  517697 host.go:66] Checking if "addons-882841" exists ...
	I1121 23:48:48.763939  517697 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:48:48.779975  517697 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1121 23:48:48.780025  517697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:48:48.796885  517697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	I1121 23:48:49.061896  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:49.226404  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:49.226545  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:49.562094  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1121 23:48:49.563918  517697 node_ready.go:57] node "addons-882841" has "Ready":"False" status (will retry)
	I1121 23:48:49.726115  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:49.726369  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:50.064918  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:50.141656  517697 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.78932831s)
	I1121 23:48:50.141793  517697 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.361792432s)
	I1121 23:48:50.144974  517697 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1121 23:48:50.147969  517697 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1121 23:48:50.150894  517697 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1121 23:48:50.150915  517697 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1121 23:48:50.168601  517697 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1121 23:48:50.168627  517697 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1121 23:48:50.182828  517697 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1121 23:48:50.182853  517697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1121 23:48:50.197267  517697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1121 23:48:50.227251  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:50.227799  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:50.569789  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:50.676131  517697 addons.go:495] Verifying addon gcp-auth=true in "addons-882841"
	I1121 23:48:50.680174  517697 out.go:179] * Verifying gcp-auth addon...
	I1121 23:48:50.683738  517697 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1121 23:48:50.697275  517697 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1121 23:48:50.697345  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:50.798280  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:50.798388  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:51.061547  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:51.186844  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:51.225706  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:51.226050  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:51.563019  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1121 23:48:51.564754  517697 node_ready.go:57] node "addons-882841" has "Ready":"False" status (will retry)
	I1121 23:48:51.687027  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:51.725980  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:51.726125  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:52.061508  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:52.187273  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:52.225188  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:52.226134  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:52.562351  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:52.686783  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:52.726106  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:52.726291  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:53.061251  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:53.187139  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:53.226699  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:53.226763  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:53.563539  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1121 23:48:53.565561  517697 node_ready.go:57] node "addons-882841" has "Ready":"False" status (will retry)
	I1121 23:48:53.687980  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:53.726016  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:53.726578  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:54.061824  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:54.186807  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:54.226561  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:54.226747  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:54.563124  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:54.687570  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:54.726097  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:54.726986  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:55.063073  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:55.187204  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:55.226823  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:55.227161  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:55.562710  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:55.688139  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:55.726509  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:55.726782  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:56.062614  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1121 23:48:56.063923  517697 node_ready.go:57] node "addons-882841" has "Ready":"False" status (will retry)
	I1121 23:48:56.187003  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:56.225895  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:56.226667  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:56.562961  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:56.687069  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:56.726608  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:56.727051  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:57.063599  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:57.186794  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:57.225554  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:57.226029  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:57.563267  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:57.687499  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:57.725119  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:57.726256  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:58.061478  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:58.186451  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:58.225058  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:58.226205  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:58.561761  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1121 23:48:58.565599  517697 node_ready.go:57] node "addons-882841" has "Ready":"False" status (will retry)
	I1121 23:48:58.687677  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:58.726602  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:58.727014  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:59.062247  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:59.187169  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:59.226299  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:59.226608  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:59.561553  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:59.687264  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:59.726299  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:59.726492  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:00.076039  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:00.196691  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:00.239729  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:00.244383  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:00.561728  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:00.686747  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:00.725892  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:00.726202  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:01.061697  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1121 23:49:01.063632  517697 node_ready.go:57] node "addons-882841" has "Ready":"False" status (will retry)
	I1121 23:49:01.186874  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:01.225673  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:01.226085  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:01.562097  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:01.688322  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:01.726889  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:01.727325  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:02.062222  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:02.186562  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:02.226082  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:02.226253  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:02.561882  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:02.687128  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:02.725633  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:02.725788  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:03.063308  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1121 23:49:03.064255  517697 node_ready.go:57] node "addons-882841" has "Ready":"False" status (will retry)
	I1121 23:49:03.186917  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:03.226137  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:03.226321  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:03.561513  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:03.686947  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:03.726091  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:03.726942  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:04.062007  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:04.186997  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:04.226157  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:04.226333  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:04.561975  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:04.687072  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:04.726155  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:04.726387  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:05.062040  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:05.192916  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:05.231624  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:05.232209  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:05.564761  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1121 23:49:05.565157  517697 node_ready.go:57] node "addons-882841" has "Ready":"False" status (will retry)
	I1121 23:49:05.687452  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:05.724971  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:05.726226  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:06.061488  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:06.186597  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:06.225841  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:06.225942  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:06.561885  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:06.687415  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:06.725334  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:06.725705  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:07.061687  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:07.186991  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:07.226382  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:07.226531  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:07.561770  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:07.687534  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:07.726183  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:07.726998  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:08.062329  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1121 23:49:08.063658  517697 node_ready.go:57] node "addons-882841" has "Ready":"False" status (will retry)
	I1121 23:49:08.186447  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:08.225429  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:08.226019  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:08.561941  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:08.686781  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:08.725936  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:08.726216  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:09.062730  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:09.186894  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:09.226053  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:09.226370  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:09.561763  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:09.687353  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:09.725387  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:09.726540  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:10.062277  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1121 23:49:10.064037  517697 node_ready.go:57] node "addons-882841" has "Ready":"False" status (will retry)
	I1121 23:49:10.187005  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:10.226247  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:10.226506  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:10.561590  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:10.686959  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:10.731027  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:10.731447  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:11.061677  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:11.186865  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:11.225907  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:11.226220  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:11.561064  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:11.687483  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:11.726400  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:11.726909  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:12.062146  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1121 23:49:12.064143  517697 node_ready.go:57] node "addons-882841" has "Ready":"False" status (will retry)
	I1121 23:49:12.187347  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:12.225612  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:12.226251  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:12.563342  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:12.687336  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:12.725312  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:12.726375  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:13.061260  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:13.187283  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:13.226420  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:13.226871  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:13.562089  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:13.687007  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:13.726165  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:13.726301  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:14.062473  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1121 23:49:14.064308  517697 node_ready.go:57] node "addons-882841" has "Ready":"False" status (will retry)
	I1121 23:49:14.187177  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:14.226479  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:14.226766  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:14.562288  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:14.686823  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:14.726126  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:14.726930  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:15.061756  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:15.187536  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:15.229283  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:15.229534  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:15.562613  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:15.687244  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:15.724929  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:15.726124  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:16.061275  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:16.187309  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:16.226590  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:16.226705  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:16.561153  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1121 23:49:16.562888  517697 node_ready.go:57] node "addons-882841" has "Ready":"False" status (will retry)
	I1121 23:49:16.686897  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:16.726452  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:16.726896  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:17.061964  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:17.187095  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:17.226224  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:17.228108  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:17.561718  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:17.687317  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:17.726552  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:17.726915  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:18.062285  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:18.186578  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:18.225353  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:18.226376  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:18.561880  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1121 23:49:18.563940  517697 node_ready.go:57] node "addons-882841" has "Ready":"False" status (will retry)
	I1121 23:49:18.687080  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:18.726038  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:18.726183  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:19.061317  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:19.187314  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:19.226257  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:19.226396  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:19.561080  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:19.686947  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:19.725411  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:19.726276  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:20.062498  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:20.186997  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:20.225881  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:20.225948  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:20.561867  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:20.686786  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:20.725545  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:20.726314  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:21.061452  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1121 23:49:21.063459  517697 node_ready.go:57] node "addons-882841" has "Ready":"False" status (will retry)
	I1121 23:49:21.187379  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:21.225793  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:21.225840  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:21.561845  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:21.686625  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:21.726416  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:21.726415  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:22.061295  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:22.187096  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:22.226372  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:22.226460  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:22.561144  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:22.686865  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:22.725639  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:22.726303  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:23.091883  517697 node_ready.go:49] node "addons-882841" is "Ready"
	I1121 23:49:23.091915  517697 node_ready.go:38] duration metric: took 40.531603726s for node "addons-882841" to be "Ready" ...
	I1121 23:49:23.091932  517697 api_server.go:52] waiting for apiserver process to appear ...
	I1121 23:49:23.091991  517697 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 23:49:23.092710  517697 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1121 23:49:23.092736  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:23.107070  517697 api_server.go:72] duration metric: took 42.306317438s to wait for apiserver process to appear ...
	I1121 23:49:23.107095  517697 api_server.go:88] waiting for apiserver healthz status ...
	I1121 23:49:23.107117  517697 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1121 23:49:23.118923  517697 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1121 23:49:23.121086  517697 api_server.go:141] control plane version: v1.34.1
	I1121 23:49:23.121114  517697 api_server.go:131] duration metric: took 14.011805ms to wait for apiserver health ...
	I1121 23:49:23.121124  517697 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 23:49:23.144501  517697 system_pods.go:59] 19 kube-system pods found
	I1121 23:49:23.144537  517697 system_pods.go:61] "coredns-66bc5c9577-zjrtb" [98eb0f4e-21c8-4403-adb4-1d0f4decde4b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 23:49:23.144544  517697 system_pods.go:61] "csi-hostpath-attacher-0" [974f6c76-34db-4887-a36d-ef4b2ccc1e37] Pending
	I1121 23:49:23.144551  517697 system_pods.go:61] "csi-hostpath-resizer-0" [b719458e-8db2-43dc-8896-8fd232b5bc58] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1121 23:49:23.144556  517697 system_pods.go:61] "csi-hostpathplugin-mkngh" [083d366a-f53b-4a51-b7ee-7acd56800894] Pending
	I1121 23:49:23.144560  517697 system_pods.go:61] "etcd-addons-882841" [5565d49c-434d-4db8-94fc-d88d8f8e9bd2] Running
	I1121 23:49:23.144564  517697 system_pods.go:61] "kindnet-wghw5" [f4454a98-7446-4179-a382-982d231fb9a7] Running
	I1121 23:49:23.144568  517697 system_pods.go:61] "kube-apiserver-addons-882841" [6bc0f536-d888-4818-9e4b-597d98d3edb4] Running
	I1121 23:49:23.144572  517697 system_pods.go:61] "kube-controller-manager-addons-882841" [1a2214c6-e2e0-4bb0-8c36-3571a5fda69c] Running
	I1121 23:49:23.144582  517697 system_pods.go:61] "kube-ingress-dns-minikube" [05451ec4-2e91-4a5d-8d8e-29b8f3931ab2] Pending
	I1121 23:49:23.144586  517697 system_pods.go:61] "kube-proxy-gthqw" [05b79d7f-9659-444f-946f-88f641a45731] Running
	I1121 23:49:23.144593  517697 system_pods.go:61] "kube-scheduler-addons-882841" [4160616a-418b-48a6-8c7c-3dc4f43ace3c] Running
	I1121 23:49:23.144600  517697 system_pods.go:61] "metrics-server-85b7d694d7-7tk8r" [99849e7c-e2a9-4b60-b8f9-7ed8bd487c73] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1121 23:49:23.144611  517697 system_pods.go:61] "nvidia-device-plugin-daemonset-4jvp9" [54878aa0-88b5-4a6b-ad02-91d34115cc3d] Pending
	I1121 23:49:23.144618  517697 system_pods.go:61] "registry-6b586f9694-5jvr4" [7a29be8b-519d-4b81-81ff-bac494b2ea86] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1121 23:49:23.144625  517697 system_pods.go:61] "registry-creds-764b6fb674-8wv2f" [dfc3c5ef-fcf8-4a4c-908c-fa2a665d682c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1121 23:49:23.144630  517697 system_pods.go:61] "registry-proxy-rrtfc" [1d8939ca-bf48-4609-94de-6b5ca07c973f] Pending
	I1121 23:49:23.144635  517697 system_pods.go:61] "snapshot-controller-7d9fbc56b8-44w6b" [9fceaa9e-21a1-46a5-acea-1901a3b30539] Pending
	I1121 23:49:23.144648  517697 system_pods.go:61] "snapshot-controller-7d9fbc56b8-q99bt" [bb9f8fcb-0d34-489e-b7f3-e8c20fc906bb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 23:49:23.144652  517697 system_pods.go:61] "storage-provisioner" [0ff2b406-8d5a-4cf0-a6a5-c79a4614dcf6] Pending
	I1121 23:49:23.144658  517697 system_pods.go:74] duration metric: took 23.528822ms to wait for pod list to return data ...
	I1121 23:49:23.144668  517697 default_sa.go:34] waiting for default service account to be created ...
	I1121 23:49:23.153980  517697 default_sa.go:45] found service account: "default"
	I1121 23:49:23.154008  517697 default_sa.go:55] duration metric: took 9.332784ms for default service account to be created ...
	I1121 23:49:23.154018  517697 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 23:49:23.257937  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:23.260230  517697 system_pods.go:86] 19 kube-system pods found
	I1121 23:49:23.260261  517697 system_pods.go:89] "coredns-66bc5c9577-zjrtb" [98eb0f4e-21c8-4403-adb4-1d0f4decde4b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 23:49:23.260268  517697 system_pods.go:89] "csi-hostpath-attacher-0" [974f6c76-34db-4887-a36d-ef4b2ccc1e37] Pending
	I1121 23:49:23.260275  517697 system_pods.go:89] "csi-hostpath-resizer-0" [b719458e-8db2-43dc-8896-8fd232b5bc58] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1121 23:49:23.260279  517697 system_pods.go:89] "csi-hostpathplugin-mkngh" [083d366a-f53b-4a51-b7ee-7acd56800894] Pending
	I1121 23:49:23.260284  517697 system_pods.go:89] "etcd-addons-882841" [5565d49c-434d-4db8-94fc-d88d8f8e9bd2] Running
	I1121 23:49:23.260288  517697 system_pods.go:89] "kindnet-wghw5" [f4454a98-7446-4179-a382-982d231fb9a7] Running
	I1121 23:49:23.260292  517697 system_pods.go:89] "kube-apiserver-addons-882841" [6bc0f536-d888-4818-9e4b-597d98d3edb4] Running
	I1121 23:49:23.260297  517697 system_pods.go:89] "kube-controller-manager-addons-882841" [1a2214c6-e2e0-4bb0-8c36-3571a5fda69c] Running
	I1121 23:49:23.260303  517697 system_pods.go:89] "kube-ingress-dns-minikube" [05451ec4-2e91-4a5d-8d8e-29b8f3931ab2] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1121 23:49:23.260310  517697 system_pods.go:89] "kube-proxy-gthqw" [05b79d7f-9659-444f-946f-88f641a45731] Running
	I1121 23:49:23.260315  517697 system_pods.go:89] "kube-scheduler-addons-882841" [4160616a-418b-48a6-8c7c-3dc4f43ace3c] Running
	I1121 23:49:23.260323  517697 system_pods.go:89] "metrics-server-85b7d694d7-7tk8r" [99849e7c-e2a9-4b60-b8f9-7ed8bd487c73] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1121 23:49:23.260327  517697 system_pods.go:89] "nvidia-device-plugin-daemonset-4jvp9" [54878aa0-88b5-4a6b-ad02-91d34115cc3d] Pending
	I1121 23:49:23.260341  517697 system_pods.go:89] "registry-6b586f9694-5jvr4" [7a29be8b-519d-4b81-81ff-bac494b2ea86] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1121 23:49:23.260348  517697 system_pods.go:89] "registry-creds-764b6fb674-8wv2f" [dfc3c5ef-fcf8-4a4c-908c-fa2a665d682c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1121 23:49:23.260358  517697 system_pods.go:89] "registry-proxy-rrtfc" [1d8939ca-bf48-4609-94de-6b5ca07c973f] Pending
	I1121 23:49:23.260363  517697 system_pods.go:89] "snapshot-controller-7d9fbc56b8-44w6b" [9fceaa9e-21a1-46a5-acea-1901a3b30539] Pending
	I1121 23:49:23.260368  517697 system_pods.go:89] "snapshot-controller-7d9fbc56b8-q99bt" [bb9f8fcb-0d34-489e-b7f3-e8c20fc906bb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 23:49:23.260372  517697 system_pods.go:89] "storage-provisioner" [0ff2b406-8d5a-4cf0-a6a5-c79a4614dcf6] Pending
	I1121 23:49:23.260391  517697 retry.go:31] will retry after 292.015422ms: missing components: kube-dns
	I1121 23:49:23.287719  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:23.288033  517697 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1121 23:49:23.288051  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:23.558948  517697 system_pods.go:86] 19 kube-system pods found
	I1121 23:49:23.558982  517697 system_pods.go:89] "coredns-66bc5c9577-zjrtb" [98eb0f4e-21c8-4403-adb4-1d0f4decde4b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 23:49:23.558991  517697 system_pods.go:89] "csi-hostpath-attacher-0" [974f6c76-34db-4887-a36d-ef4b2ccc1e37] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1121 23:49:23.558998  517697 system_pods.go:89] "csi-hostpath-resizer-0" [b719458e-8db2-43dc-8896-8fd232b5bc58] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1121 23:49:23.559006  517697 system_pods.go:89] "csi-hostpathplugin-mkngh" [083d366a-f53b-4a51-b7ee-7acd56800894] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1121 23:49:23.559011  517697 system_pods.go:89] "etcd-addons-882841" [5565d49c-434d-4db8-94fc-d88d8f8e9bd2] Running
	I1121 23:49:23.559016  517697 system_pods.go:89] "kindnet-wghw5" [f4454a98-7446-4179-a382-982d231fb9a7] Running
	I1121 23:49:23.559026  517697 system_pods.go:89] "kube-apiserver-addons-882841" [6bc0f536-d888-4818-9e4b-597d98d3edb4] Running
	I1121 23:49:23.559031  517697 system_pods.go:89] "kube-controller-manager-addons-882841" [1a2214c6-e2e0-4bb0-8c36-3571a5fda69c] Running
	I1121 23:49:23.559040  517697 system_pods.go:89] "kube-ingress-dns-minikube" [05451ec4-2e91-4a5d-8d8e-29b8f3931ab2] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1121 23:49:23.559044  517697 system_pods.go:89] "kube-proxy-gthqw" [05b79d7f-9659-444f-946f-88f641a45731] Running
	I1121 23:49:23.559060  517697 system_pods.go:89] "kube-scheduler-addons-882841" [4160616a-418b-48a6-8c7c-3dc4f43ace3c] Running
	I1121 23:49:23.559066  517697 system_pods.go:89] "metrics-server-85b7d694d7-7tk8r" [99849e7c-e2a9-4b60-b8f9-7ed8bd487c73] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1121 23:49:23.559070  517697 system_pods.go:89] "nvidia-device-plugin-daemonset-4jvp9" [54878aa0-88b5-4a6b-ad02-91d34115cc3d] Pending
	I1121 23:49:23.559083  517697 system_pods.go:89] "registry-6b586f9694-5jvr4" [7a29be8b-519d-4b81-81ff-bac494b2ea86] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1121 23:49:23.559089  517697 system_pods.go:89] "registry-creds-764b6fb674-8wv2f" [dfc3c5ef-fcf8-4a4c-908c-fa2a665d682c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1121 23:49:23.559093  517697 system_pods.go:89] "registry-proxy-rrtfc" [1d8939ca-bf48-4609-94de-6b5ca07c973f] Pending
	I1121 23:49:23.559099  517697 system_pods.go:89] "snapshot-controller-7d9fbc56b8-44w6b" [9fceaa9e-21a1-46a5-acea-1901a3b30539] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 23:49:23.559106  517697 system_pods.go:89] "snapshot-controller-7d9fbc56b8-q99bt" [bb9f8fcb-0d34-489e-b7f3-e8c20fc906bb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 23:49:23.559114  517697 system_pods.go:89] "storage-provisioner" [0ff2b406-8d5a-4cf0-a6a5-c79a4614dcf6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 23:49:23.559130  517697 retry.go:31] will retry after 316.889207ms: missing components: kube-dns
	I1121 23:49:23.562696  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:23.687425  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:23.779408  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:23.779565  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:23.883709  517697 system_pods.go:86] 19 kube-system pods found
	I1121 23:49:23.883755  517697 system_pods.go:89] "coredns-66bc5c9577-zjrtb" [98eb0f4e-21c8-4403-adb4-1d0f4decde4b] Running
	I1121 23:49:23.883765  517697 system_pods.go:89] "csi-hostpath-attacher-0" [974f6c76-34db-4887-a36d-ef4b2ccc1e37] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1121 23:49:23.883772  517697 system_pods.go:89] "csi-hostpath-resizer-0" [b719458e-8db2-43dc-8896-8fd232b5bc58] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1121 23:49:23.883782  517697 system_pods.go:89] "csi-hostpathplugin-mkngh" [083d366a-f53b-4a51-b7ee-7acd56800894] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1121 23:49:23.883787  517697 system_pods.go:89] "etcd-addons-882841" [5565d49c-434d-4db8-94fc-d88d8f8e9bd2] Running
	I1121 23:49:23.883792  517697 system_pods.go:89] "kindnet-wghw5" [f4454a98-7446-4179-a382-982d231fb9a7] Running
	I1121 23:49:23.883808  517697 system_pods.go:89] "kube-apiserver-addons-882841" [6bc0f536-d888-4818-9e4b-597d98d3edb4] Running
	I1121 23:49:23.883813  517697 system_pods.go:89] "kube-controller-manager-addons-882841" [1a2214c6-e2e0-4bb0-8c36-3571a5fda69c] Running
	I1121 23:49:23.883833  517697 system_pods.go:89] "kube-ingress-dns-minikube" [05451ec4-2e91-4a5d-8d8e-29b8f3931ab2] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1121 23:49:23.883837  517697 system_pods.go:89] "kube-proxy-gthqw" [05b79d7f-9659-444f-946f-88f641a45731] Running
	I1121 23:49:23.883842  517697 system_pods.go:89] "kube-scheduler-addons-882841" [4160616a-418b-48a6-8c7c-3dc4f43ace3c] Running
	I1121 23:49:23.883854  517697 system_pods.go:89] "metrics-server-85b7d694d7-7tk8r" [99849e7c-e2a9-4b60-b8f9-7ed8bd487c73] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1121 23:49:23.883861  517697 system_pods.go:89] "nvidia-device-plugin-daemonset-4jvp9" [54878aa0-88b5-4a6b-ad02-91d34115cc3d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1121 23:49:23.883871  517697 system_pods.go:89] "registry-6b586f9694-5jvr4" [7a29be8b-519d-4b81-81ff-bac494b2ea86] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1121 23:49:23.883878  517697 system_pods.go:89] "registry-creds-764b6fb674-8wv2f" [dfc3c5ef-fcf8-4a4c-908c-fa2a665d682c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1121 23:49:23.883884  517697 system_pods.go:89] "registry-proxy-rrtfc" [1d8939ca-bf48-4609-94de-6b5ca07c973f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1121 23:49:23.883891  517697 system_pods.go:89] "snapshot-controller-7d9fbc56b8-44w6b" [9fceaa9e-21a1-46a5-acea-1901a3b30539] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 23:49:23.883900  517697 system_pods.go:89] "snapshot-controller-7d9fbc56b8-q99bt" [bb9f8fcb-0d34-489e-b7f3-e8c20fc906bb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 23:49:23.883904  517697 system_pods.go:89] "storage-provisioner" [0ff2b406-8d5a-4cf0-a6a5-c79a4614dcf6] Running
	I1121 23:49:23.883927  517697 system_pods.go:126] duration metric: took 729.893104ms to wait for k8s-apps to be running ...
	I1121 23:49:23.883939  517697 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 23:49:23.884004  517697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 23:49:23.907047  517697 system_svc.go:56] duration metric: took 23.097745ms WaitForService to wait for kubelet
	I1121 23:49:23.907076  517697 kubeadm.go:587] duration metric: took 43.106328361s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 23:49:23.907100  517697 node_conditions.go:102] verifying NodePressure condition ...
	I1121 23:49:23.910894  517697 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1121 23:49:23.910927  517697 node_conditions.go:123] node cpu capacity is 2
	I1121 23:49:23.910948  517697 node_conditions.go:105] duration metric: took 3.838306ms to run NodePressure ...
	I1121 23:49:23.910968  517697 start.go:242] waiting for startup goroutines ...
	I1121 23:49:24.062817  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:24.186814  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:24.227544  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:24.228325  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:24.561932  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:24.688700  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:24.733242  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:24.733629  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:25.062555  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:25.187223  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:25.227140  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:25.227449  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:25.561986  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:25.687540  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:25.726756  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:25.727615  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:26.062167  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:26.187520  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:26.225860  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:26.227830  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:26.562917  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:26.688047  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:26.789052  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:26.789289  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:27.062102  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:27.187542  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:27.227312  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:27.227503  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:27.562246  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:27.687740  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:27.726586  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:27.727240  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:28.063082  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:28.187265  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:28.227395  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:28.227987  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:28.562660  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:28.686665  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:28.726555  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:28.727197  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:29.062941  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:29.187278  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:29.227551  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:29.228218  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:29.561870  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:29.687358  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:29.727107  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:29.727350  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:30.062266  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:30.187810  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:30.227770  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:30.227900  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:30.562480  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:30.687643  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:30.731367  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:30.731694  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:31.062417  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:31.187731  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:31.227662  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:31.228003  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:31.564337  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:31.688418  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:31.726863  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:31.727030  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:32.063185  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:32.187200  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:32.226317  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:32.226749  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:32.565751  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:32.691172  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:32.728699  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:32.736937  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:33.062920  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:33.187650  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:33.228365  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:33.228646  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:33.564268  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:33.688202  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:33.727579  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:33.727813  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:34.062949  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:34.187375  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:34.225725  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:34.226398  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:34.562741  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:34.688385  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:34.728882  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:34.729273  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:35.062729  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:35.187399  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:35.228380  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:35.228827  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:35.562451  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:35.687595  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:35.788767  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:35.788721  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:36.062187  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:36.187024  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:36.227244  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:36.227384  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:36.561639  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:36.686801  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:36.727237  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:36.727445  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:37.062848  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:37.187493  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:37.227419  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:37.227618  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:37.562745  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:37.687004  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:37.726276  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:37.727040  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:38.062993  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:38.187243  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:38.227530  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:38.228295  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:38.562077  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:38.687132  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:38.726423  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:38.727363  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:39.062066  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:39.187605  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:39.228471  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:39.228751  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:39.562917  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:39.693116  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:39.728033  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:39.728456  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:40.062685  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:40.186735  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:40.226654  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:40.226693  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:40.561890  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:40.687342  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:40.735286  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:40.736416  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:41.062233  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:41.187405  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:41.288710  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:41.289033  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:41.562904  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:41.686714  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:41.726845  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:41.726958  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:42.063444  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:42.187410  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:42.226561  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:42.228088  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:42.561869  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:42.687551  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:42.725965  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:42.726852  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:43.068039  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:43.187655  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:43.228111  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:43.228505  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:43.562270  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:43.687608  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:43.736256  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:43.736719  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:44.065521  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:44.187726  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:44.228549  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:44.229538  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:44.562842  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:44.687893  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:44.728348  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:44.728756  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:45.064666  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:45.189958  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:45.232713  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:45.233222  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:45.562565  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:45.687729  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:45.728652  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:45.728828  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:46.062829  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:46.187133  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:46.227957  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:46.228583  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:46.562410  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:46.687602  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:46.727915  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:46.728345  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:47.062156  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:47.186843  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:47.226563  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:47.227434  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:47.562511  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:47.687267  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:47.725543  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:47.726242  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:48.062228  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:48.187485  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:48.227470  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:48.227859  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:48.562620  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:48.687421  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:48.727207  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:48.727510  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:49.066594  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:49.186748  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:49.226331  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:49.226512  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:49.562185  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:49.686716  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:49.726308  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:49.726661  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:50.062650  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:50.186477  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:50.225439  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:50.227030  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:50.562458  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:50.690472  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:50.725724  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:50.726954  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:51.064161  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:51.188247  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:51.227598  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:51.229215  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:51.561529  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:51.687299  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:51.726249  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:51.726396  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:52.062447  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:52.186636  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:52.227279  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:52.228716  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:52.562053  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:52.687745  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:52.728468  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:52.729474  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:53.062572  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:53.187945  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:53.227110  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:53.227773  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:53.563038  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:53.686884  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:53.726752  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:53.726929  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:54.062680  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:54.187182  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:54.227599  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:54.228016  517697 kapi.go:107] duration metric: took 1m7.005739653s to wait for kubernetes.io/minikube-addons=registry ...
	I1121 23:49:54.562356  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:54.688624  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:54.726988  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:55.062844  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:55.186873  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:55.227003  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:55.562969  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:55.686995  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:55.727451  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:56.062214  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:56.187023  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:56.227016  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:56.562660  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:56.689743  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:56.726846  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:57.074853  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:57.187826  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:57.227365  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:57.562076  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:57.687001  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:57.727166  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:58.061930  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:58.187333  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:58.226876  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:58.562587  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:58.686498  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:58.727089  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:59.062674  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:59.187012  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:59.226358  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:59.562139  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:59.687429  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:59.726781  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:00.106593  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:00.215454  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:00.240663  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:00.563131  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:00.687514  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:00.727276  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:01.061705  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:01.187688  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:01.227437  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:01.563064  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:01.687137  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:01.726952  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:02.065222  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:02.188113  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:02.228380  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:02.562625  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:02.686556  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:02.726543  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:03.062657  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:03.187096  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:03.287602  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:03.562081  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:03.687899  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:03.727116  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:04.063303  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:04.187766  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:04.227386  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:04.562216  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:04.689316  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:04.726665  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:05.062705  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:05.187631  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:05.226985  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:05.563175  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:05.687215  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:05.726557  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:06.062824  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:06.187263  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:06.226588  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:06.562928  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:06.686910  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:06.727761  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:07.071286  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:07.188061  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:07.226053  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:07.565143  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:07.687205  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:07.726865  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:08.062595  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:08.188417  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:08.227203  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:08.566899  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:08.686595  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:08.729651  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:09.062177  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:09.187632  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:09.227024  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:09.562312  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:09.688679  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:09.792517  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:10.063121  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:10.187037  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:10.227775  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:10.562414  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:10.690316  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:10.728869  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:11.062711  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:11.186982  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:11.227175  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:11.563447  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:11.687691  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:11.727131  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:12.063221  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:12.187423  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:12.226480  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:12.561561  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:12.687692  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:12.726512  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:13.062627  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:13.187882  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:13.227581  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:13.571068  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:13.686834  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:13.726643  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:14.062610  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:14.188398  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:14.226713  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:14.562128  517697 kapi.go:107] duration metric: took 1m27.003730955s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1121 23:50:14.686868  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:14.727257  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:15.187581  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:15.226687  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:15.686956  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:15.727146  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:16.188092  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:16.227262  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:16.686951  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:16.727123  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:17.187477  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:17.226519  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:17.687614  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:17.726347  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:18.187147  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:18.226262  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:18.687743  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:18.726912  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:19.187344  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:19.288198  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:19.687996  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:19.789297  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:20.187301  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:20.226458  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:20.686817  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:20.727027  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:21.191351  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:21.226528  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:21.686658  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:21.726738  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:22.193334  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:22.235000  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:22.687252  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:22.726107  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:23.187541  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:23.226559  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:23.686817  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:23.727177  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:24.197364  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:24.226484  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:24.686992  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:24.727468  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:25.187335  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:25.226153  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:25.687975  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:25.727087  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:26.187790  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:26.226901  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:26.687629  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:26.727002  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:27.188138  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:27.228646  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:27.687097  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:27.726079  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:28.187963  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:28.226402  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:28.687955  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:28.727008  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:29.188310  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:29.227458  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:29.688709  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:29.728614  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:30.191373  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:30.226549  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:30.690001  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:30.727161  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:31.195490  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:31.231331  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:31.687865  517697 kapi.go:107] duration metric: took 1m41.004128252s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1121 23:50:31.691505  517697 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-882841 cluster.
	I1121 23:50:31.694749  517697 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1121 23:50:31.698143  517697 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1121 23:50:31.727783  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:32.227784  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:32.726638  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:33.227306  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:33.726413  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:34.229660  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:34.727146  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:35.227335  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:35.727107  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:36.233629  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:36.727762  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:37.226669  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:37.726800  517697 kapi.go:107] duration metric: took 1m50.503726384s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1121 23:50:37.730115  517697 out.go:179] * Enabled addons: default-storageclass, inspektor-gadget, registry-creds, storage-provisioner, amd-gpu-device-plugin, nvidia-device-plugin, cloud-spanner, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, gcp-auth, ingress
	I1121 23:50:37.733327  517697 addons.go:530] duration metric: took 1m56.932312921s for enable addons: enabled=[default-storageclass inspektor-gadget registry-creds storage-provisioner amd-gpu-device-plugin nvidia-device-plugin cloud-spanner ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver gcp-auth ingress]
	I1121 23:50:37.733373  517697 start.go:247] waiting for cluster config update ...
	I1121 23:50:37.733399  517697 start.go:256] writing updated cluster config ...
	I1121 23:50:37.733687  517697 ssh_runner.go:195] Run: rm -f paused
	I1121 23:50:37.738474  517697 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 23:50:37.741769  517697 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zjrtb" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:50:37.746938  517697 pod_ready.go:94] pod "coredns-66bc5c9577-zjrtb" is "Ready"
	I1121 23:50:37.747009  517697 pod_ready.go:86] duration metric: took 5.133432ms for pod "coredns-66bc5c9577-zjrtb" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:50:37.749591  517697 pod_ready.go:83] waiting for pod "etcd-addons-882841" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:50:37.754400  517697 pod_ready.go:94] pod "etcd-addons-882841" is "Ready"
	I1121 23:50:37.754430  517697 pod_ready.go:86] duration metric: took 4.811889ms for pod "etcd-addons-882841" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:50:37.756715  517697 pod_ready.go:83] waiting for pod "kube-apiserver-addons-882841" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:50:37.761504  517697 pod_ready.go:94] pod "kube-apiserver-addons-882841" is "Ready"
	I1121 23:50:37.761530  517697 pod_ready.go:86] duration metric: took 4.750525ms for pod "kube-apiserver-addons-882841" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:50:37.764141  517697 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-882841" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:50:38.142465  517697 pod_ready.go:94] pod "kube-controller-manager-addons-882841" is "Ready"
	I1121 23:50:38.142497  517697 pod_ready.go:86] duration metric: took 378.334868ms for pod "kube-controller-manager-addons-882841" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:50:38.342625  517697 pod_ready.go:83] waiting for pod "kube-proxy-gthqw" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:50:38.742880  517697 pod_ready.go:94] pod "kube-proxy-gthqw" is "Ready"
	I1121 23:50:38.742908  517697 pod_ready.go:86] duration metric: took 400.251724ms for pod "kube-proxy-gthqw" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:50:38.942322  517697 pod_ready.go:83] waiting for pod "kube-scheduler-addons-882841" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:50:39.342604  517697 pod_ready.go:94] pod "kube-scheduler-addons-882841" is "Ready"
	I1121 23:50:39.342635  517697 pod_ready.go:86] duration metric: took 400.288014ms for pod "kube-scheduler-addons-882841" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:50:39.342649  517697 pod_ready.go:40] duration metric: took 1.604140769s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 23:50:39.404354  517697 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1121 23:50:39.407995  517697 out.go:179] * Done! kubectl is now configured to use "addons-882841" cluster and "default" namespace by default
	
	
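	The gcp-auth enable output above notes that a pod can opt out of credential mounting by carrying a label with the `gcp-auth-skip-secret` key. As a minimal sketch only (not part of the test run): the pod name, container image, and the label value "true" below are assumptions, since the log names only the label key. It builds the spec with the Kubernetes Go types and prints it as YAML suitable for `kubectl apply -f -`.

	// Hypothetical illustration: a Pod carrying the gcp-auth-skip-secret label.
	// The value "true" is an assumption; the minikube message only names the key.
	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"sigs.k8s.io/yaml"
	)

	func main() {
		pod := corev1.Pod{
			TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
			ObjectMeta: metav1.ObjectMeta{
				Name:   "no-gcp-creds", // hypothetical name
				Labels: map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{
					{Name: "app", Image: "gcr.io/k8s-minikube/busybox", Command: []string{"sleep", "3600"}},
				},
			},
		}
		out, err := yaml.Marshal(&pod)
		if err != nil {
			panic(err)
		}
		fmt.Print(string(out)) // Pod manifest as YAML; gcp-auth's webhook should leave it unmodified
	}
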
	==> CRI-O <==
	Nov 21 23:53:49 addons-882841 crio[827]: time="2025-11-21T23:53:49.564142019Z" level=info msg="Checking image status: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=1b84726c-275c-4574-979a-b8e376430d01 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 23:53:49 addons-882841 crio[827]: time="2025-11-21T23:53:49.565220217Z" level=info msg="Checking image status: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=00e1f77f-dea8-4c85-a5fe-71dd79b61422 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 23:53:49 addons-882841 crio[827]: time="2025-11-21T23:53:49.566616068Z" level=info msg="Creating container: kube-system/registry-creds-764b6fb674-8wv2f/registry-creds" id=d1216a2e-471e-4efe-8b72-921cab9b0b9d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 23:53:49 addons-882841 crio[827]: time="2025-11-21T23:53:49.566739527Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 23:53:49 addons-882841 crio[827]: time="2025-11-21T23:53:49.579657899Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 23:53:49 addons-882841 crio[827]: time="2025-11-21T23:53:49.580960329Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 23:53:49 addons-882841 crio[827]: time="2025-11-21T23:53:49.607935635Z" level=info msg="Created container 59d531c0f0a7a7f8e8d693bbbf23dc2d087ef4a78aac796f830d21d42626a0f0: kube-system/registry-creds-764b6fb674-8wv2f/registry-creds" id=d1216a2e-471e-4efe-8b72-921cab9b0b9d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 23:53:49 addons-882841 crio[827]: time="2025-11-21T23:53:49.612416718Z" level=info msg="Starting container: 59d531c0f0a7a7f8e8d693bbbf23dc2d087ef4a78aac796f830d21d42626a0f0" id=b3ca9059-52af-43cb-9b21-690ee131976c name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 23:53:49 addons-882841 crio[827]: time="2025-11-21T23:53:49.614346552Z" level=info msg="Started container" PID=7178 containerID=59d531c0f0a7a7f8e8d693bbbf23dc2d087ef4a78aac796f830d21d42626a0f0 description=kube-system/registry-creds-764b6fb674-8wv2f/registry-creds id=b3ca9059-52af-43cb-9b21-690ee131976c name=/runtime.v1.RuntimeService/StartContainer sandboxID=686d24dcd0091f4e02f02fbd5810bb0708b9ba8cba47a1effb8215372a2fd355
	Nov 21 23:53:49 addons-882841 conmon[7176]: conmon 59d531c0f0a7a7f8e8d6 <ninfo>: container 7178 exited with status 1
	Nov 21 23:53:50 addons-882841 crio[827]: time="2025-11-21T23:53:50.009437775Z" level=info msg="Removing container: 273bfc38d60cb0b9c99a162365b410090fafa11e0c75ee599f0b7a175ec83c6b" id=89be87ce-e76c-44c1-9559-7860414aa5ad name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 21 23:53:50 addons-882841 crio[827]: time="2025-11-21T23:53:50.066771523Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=b8c25600-7188-444b-9806-f5a04fd5b7a5 name=/runtime.v1.ImageService/PullImage
	Nov 21 23:53:50 addons-882841 crio[827]: time="2025-11-21T23:53:50.067872244Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=102a97c2-480c-4197-942a-b64a9c876ef3 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 23:53:50 addons-882841 crio[827]: time="2025-11-21T23:53:50.071087614Z" level=info msg="Error loading conmon cgroup of container 273bfc38d60cb0b9c99a162365b410090fafa11e0c75ee599f0b7a175ec83c6b: cgroup deleted" id=89be87ce-e76c-44c1-9559-7860414aa5ad name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 21 23:53:50 addons-882841 crio[827]: time="2025-11-21T23:53:50.077992679Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=ada1b21a-43a3-420a-8c32-5f2fdf4350be name=/runtime.v1.ImageService/ImageStatus
	Nov 21 23:53:50 addons-882841 crio[827]: time="2025-11-21T23:53:50.099123444Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-4kgjp/hello-world-app" id=dff918be-1b54-4175-b41f-22e6784a773c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 23:53:50 addons-882841 crio[827]: time="2025-11-21T23:53:50.099401533Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 23:53:50 addons-882841 crio[827]: time="2025-11-21T23:53:50.104823136Z" level=info msg="Removed container 273bfc38d60cb0b9c99a162365b410090fafa11e0c75ee599f0b7a175ec83c6b: kube-system/registry-creds-764b6fb674-8wv2f/registry-creds" id=89be87ce-e76c-44c1-9559-7860414aa5ad name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 21 23:53:50 addons-882841 crio[827]: time="2025-11-21T23:53:50.112779708Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 23:53:50 addons-882841 crio[827]: time="2025-11-21T23:53:50.11315139Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/982bebfb2d54c34ec1cf6a96e09dda78f85fdee8b5ac5ff7fba552ae6d47f4c5/merged/etc/passwd: no such file or directory"
	Nov 21 23:53:50 addons-882841 crio[827]: time="2025-11-21T23:53:50.1132775Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/982bebfb2d54c34ec1cf6a96e09dda78f85fdee8b5ac5ff7fba552ae6d47f4c5/merged/etc/group: no such file or directory"
	Nov 21 23:53:50 addons-882841 crio[827]: time="2025-11-21T23:53:50.113648115Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 23:53:50 addons-882841 crio[827]: time="2025-11-21T23:53:50.133783413Z" level=info msg="Created container ec220c2c64b5a35983ae35f9648d8d942f64cda05f8635e7d0879c8bd989b8e0: default/hello-world-app-5d498dc89-4kgjp/hello-world-app" id=dff918be-1b54-4175-b41f-22e6784a773c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 23:53:50 addons-882841 crio[827]: time="2025-11-21T23:53:50.143600086Z" level=info msg="Starting container: ec220c2c64b5a35983ae35f9648d8d942f64cda05f8635e7d0879c8bd989b8e0" id=bd96b039-b3cc-4d40-8f45-45b379b1c673 name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 23:53:50 addons-882841 crio[827]: time="2025-11-21T23:53:50.148567728Z" level=info msg="Started container" PID=7221 containerID=ec220c2c64b5a35983ae35f9648d8d942f64cda05f8635e7d0879c8bd989b8e0 description=default/hello-world-app-5d498dc89-4kgjp/hello-world-app id=bd96b039-b3cc-4d40-8f45-45b379b1c673 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e0a6a52ddf05c947be01659489e978468a0be0d54cef8b2d6246e80671403de3
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	ec220c2c64b5a       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   e0a6a52ddf05c       hello-world-app-5d498dc89-4kgjp            default
	59d531c0f0a7a       a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14                                                                             1 second ago             Exited              registry-creds                           2                   686d24dcd0091       registry-creds-764b6fb674-8wv2f            kube-system
	ea3c8650015b4       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90                                              2 minutes ago            Running             nginx                                    0                   51619623fa10f       nginx                                      default
	2ea6072bf2123       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          3 minutes ago            Running             busybox                                  0                   d3669523d5fc5       busybox                                    default
	bbd9358920805       registry.k8s.io/ingress-nginx/controller@sha256:655333e68deab34ee3701f400c4d5d9709000cdfdadb802e4bd7500b027e1259                             3 minutes ago            Running             controller                               0                   0559b46fe9974       ingress-nginx-controller-6c8bf45fb-tnj6x   ingress-nginx
	ea59934e5547e       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   aac08c4fcde57       gcp-auth-78565c9fb4-rpmr7                  gcp-auth
	d4288bfcc52ca       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   6fc2758cdfb87       csi-hostpathplugin-mkngh                   kube-system
	c2b953c3eb94b       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   6fc2758cdfb87       csi-hostpathplugin-mkngh                   kube-system
	4b84c2719358c       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   6fc2758cdfb87       csi-hostpathplugin-mkngh                   kube-system
	03e07c8bd5633       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   6fc2758cdfb87       csi-hostpathplugin-mkngh                   kube-system
	4f400d29139bb       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   6fc2758cdfb87       csi-hostpathplugin-mkngh                   kube-system
	d0aa8b937f2aa       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c2c5268a38de5c792beb84122c5350c644fbb9b85e04342ef72fa9a6d052f0b0                            3 minutes ago            Running             gadget                                   0                   18d998949e253       gadget-84krr                               gadget
	0ecc1bcf50504       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago            Running             csi-external-health-monitor-controller   0                   6fc2758cdfb87       csi-hostpathplugin-mkngh                   kube-system
	0d6392a88c56a       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   6fdfcb727159f       nvidia-device-plugin-daemonset-4jvp9       kube-system
	8eba7b83af467       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   3 minutes ago            Exited              patch                                    0                   77fe03c4dbab0       ingress-nginx-admission-patch-lfbrr        ingress-nginx
	492d7c5835fb8       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             3 minutes ago            Running             csi-attacher                             0                   c91237a67e54a       csi-hostpath-attacher-0                    kube-system
	438ba026464ea       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   3 minutes ago            Exited              create                                   0                   1c3a1894461df       ingress-nginx-admission-create-f9tdh       ingress-nginx
	96794062627c7       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              3 minutes ago            Running             csi-resizer                              0                   30c9f2ed0a3ba       csi-hostpath-resizer-0                     kube-system
	d6df5ea7f4eb5       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago            Running             registry-proxy                           0                   68c26a2e4f7da       registry-proxy-rrtfc                       kube-system
	800239fcbfd60       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago            Running             volume-snapshot-controller               0                   6f45d55fc6313       snapshot-controller-7d9fbc56b8-44w6b       kube-system
	1d9bfd16346a7       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago            Running             volume-snapshot-controller               0                   9f2b3bc75d335       snapshot-controller-7d9fbc56b8-q99bt       kube-system
	cd8dbe13ad5f4       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              4 minutes ago            Running             yakd                                     0                   08e33db47f3a7       yakd-dashboard-5ff678cb9-x6sbv             yakd-dashboard
	b7eb954adbbab       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        4 minutes ago            Running             metrics-server                           0                   72fe85b281535       metrics-server-85b7d694d7-7tk8r            kube-system
	ba42295e49f9a       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           4 minutes ago            Running             registry                                 0                   27f67d1dd7bf1       registry-6b586f9694-5jvr4                  kube-system
	2d039a17459cf       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               4 minutes ago            Running             cloud-spanner-emulator                   0                   1d3f95b027770       cloud-spanner-emulator-6f9fcf858b-z8s5r    default
	36f901d726865       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               4 minutes ago            Running             minikube-ingress-dns                     0                   bf601ff2e8c9e       kube-ingress-dns-minikube                  kube-system
	e7957c170631a       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             4 minutes ago            Running             local-path-provisioner                   0                   444ccf0807ab2       local-path-provisioner-648f6765c9-sv9ds    local-path-storage
	561d110537c5c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   bffa57760e853       storage-provisioner                        kube-system
	e6dc31e093068       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   849fa7ee214a9       coredns-66bc5c9577-zjrtb                   kube-system
	074654b9d6b9f       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             5 minutes ago            Running             kube-proxy                               0                   8debede13cde9       kube-proxy-gthqw                           kube-system
	970af788676bd       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             5 minutes ago            Running             kindnet-cni                              0                   cdbadd866640c       kindnet-wghw5                              kube-system
	f6c2269669bcf       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             5 minutes ago            Running             kube-controller-manager                  0                   29706823db1fd       kube-controller-manager-addons-882841      kube-system
	415d7ebb38dbf       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             5 minutes ago            Running             kube-apiserver                           0                   81f16edd0c739       kube-apiserver-addons-882841               kube-system
	ba805611fb053       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             5 minutes ago            Running             kube-scheduler                           0                   2bf7c2eeb1e7e       kube-scheduler-addons-882841               kube-system
	0b539dfc17788       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             5 minutes ago            Running             etcd                                     0                   31bf5833591bf       etcd-addons-882841                         kube-system
	
	
	==> coredns [e6dc31e0930681fac6fbd625f4ec7a07e57c10d13a728a7ec163a4c66a6d4a2b] <==
	[INFO] 10.244.0.12:32983 - 42028 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.003126023s
	[INFO] 10.244.0.12:32983 - 31053 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.001051014s
	[INFO] 10.244.0.12:32983 - 29123 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.00043865s
	[INFO] 10.244.0.12:36008 - 5721 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000157812s
	[INFO] 10.244.0.12:36008 - 5508 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000067887s
	[INFO] 10.244.0.12:34000 - 476 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000095013s
	[INFO] 10.244.0.12:34000 - 31 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000062439s
	[INFO] 10.244.0.12:50238 - 47165 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000080071s
	[INFO] 10.244.0.12:50238 - 46968 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000059994s
	[INFO] 10.244.0.12:46915 - 17370 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002665852s
	[INFO] 10.244.0.12:46915 - 16909 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002678413s
	[INFO] 10.244.0.12:39562 - 34668 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000096055s
	[INFO] 10.244.0.12:39562 - 34512 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000066066s
	[INFO] 10.244.0.21:39683 - 37096 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000273296s
	[INFO] 10.244.0.21:44539 - 15643 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000194464s
	[INFO] 10.244.0.21:33461 - 646 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000132944s
	[INFO] 10.244.0.21:53688 - 51532 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000186579s
	[INFO] 10.244.0.21:44031 - 64469 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000126322s
	[INFO] 10.244.0.21:44494 - 39960 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000077356s
	[INFO] 10.244.0.21:49021 - 36189 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001748667s
	[INFO] 10.244.0.21:35656 - 579 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.005246297s
	[INFO] 10.244.0.21:43206 - 41865 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001533741s
	[INFO] 10.244.0.21:49219 - 29542 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.0019691s
	[INFO] 10.244.0.23:52402 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000162802s
	[INFO] 10.244.0.23:46277 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000105893s
	
	
	==> describe nodes <==
	Name:               addons-882841
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-882841
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=addons-882841
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T23_48_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-882841
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-882841"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 23:48:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-882841
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 23:53:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 23:53:43 +0000   Fri, 21 Nov 2025 23:48:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 23:53:43 +0000   Fri, 21 Nov 2025 23:48:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 23:53:43 +0000   Fri, 21 Nov 2025 23:48:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Nov 2025 23:53:43 +0000   Fri, 21 Nov 2025 23:49:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-882841
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c92eea4d3c03c0156e01fcf8691e3907
	  System UUID:                5694beba-5776-4cd9-a5e8-6657562a60ef
	  Boot ID:                    72ac7385-472f-47d1-a23e-bd80468d6e09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m11s
	  default                     cloud-spanner-emulator-6f9fcf858b-z8s5r     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m7s
	  default                     hello-world-app-5d498dc89-4kgjp             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  gadget                      gadget-84krr                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	  gcp-auth                    gcp-auth-78565c9fb4-rpmr7                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-tnj6x    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m4s
	  kube-system                 coredns-66bc5c9577-zjrtb                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m10s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 csi-hostpathplugin-mkngh                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 etcd-addons-882841                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m15s
	  kube-system                 kindnet-wghw5                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m10s
	  kube-system                 kube-apiserver-addons-882841                250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 kube-controller-manager-addons-882841       200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 kube-proxy-gthqw                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 kube-scheduler-addons-882841                100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 metrics-server-85b7d694d7-7tk8r             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         5m6s
	  kube-system                 nvidia-device-plugin-daemonset-4jvp9        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 registry-6b586f9694-5jvr4                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 registry-creds-764b6fb674-8wv2f             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 registry-proxy-rrtfc                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 snapshot-controller-7d9fbc56b8-44w6b        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 snapshot-controller-7d9fbc56b8-q99bt        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m6s
	  local-path-storage          local-path-provisioner-648f6765c9-sv9ds     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-x6sbv              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m8s                   kube-proxy       
	  Normal   Starting                 5m22s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m22s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m22s (x8 over 5m22s)  kubelet          Node addons-882841 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m22s (x8 over 5m22s)  kubelet          Node addons-882841 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m22s (x8 over 5m22s)  kubelet          Node addons-882841 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m15s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m15s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m15s                  kubelet          Node addons-882841 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m15s                  kubelet          Node addons-882841 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m15s                  kubelet          Node addons-882841 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m11s                  node-controller  Node addons-882841 event: Registered Node addons-882841 in Controller
	  Normal   NodeReady                4m29s                  kubelet          Node addons-882841 status is now: NodeReady
	
	
	==> dmesg <==
	[ +23.891724] overlayfs: idmapped layers are currently not supported
	[Nov21 23:06] overlayfs: idmapped layers are currently not supported
	[ +32.573452] overlayfs: idmapped layers are currently not supported
	[  +9.452963] overlayfs: idmapped layers are currently not supported
	[Nov21 23:08] overlayfs: idmapped layers are currently not supported
	[ +24.877472] overlayfs: idmapped layers are currently not supported
	[Nov21 23:11] overlayfs: idmapped layers are currently not supported
	[Nov21 23:13] overlayfs: idmapped layers are currently not supported
	[Nov21 23:14] overlayfs: idmapped layers are currently not supported
	[Nov21 23:15] overlayfs: idmapped layers are currently not supported
	[Nov21 23:16] overlayfs: idmapped layers are currently not supported
	[Nov21 23:17] overlayfs: idmapped layers are currently not supported
	[ +10.681159] overlayfs: idmapped layers are currently not supported
	[Nov21 23:19] overlayfs: idmapped layers are currently not supported
	[ +15.192296] overlayfs: idmapped layers are currently not supported
	[Nov21 23:20] overlayfs: idmapped layers are currently not supported
	[Nov21 23:21] overlayfs: idmapped layers are currently not supported
	[Nov21 23:22] overlayfs: idmapped layers are currently not supported
	[ +12.884842] overlayfs: idmapped layers are currently not supported
	[Nov21 23:23] overlayfs: idmapped layers are currently not supported
	[ +12.022080] overlayfs: idmapped layers are currently not supported
	[Nov21 23:25] overlayfs: idmapped layers are currently not supported
	[ +24.447615] overlayfs: idmapped layers are currently not supported
	[Nov21 23:46] kauditd_printk_skb: 8 callbacks suppressed
	[Nov21 23:48] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [0b539dfc17788b7400ee2eb5abadb008e5a8a9796bb112d8a69f14a34f2fd551] <==
	{"level":"warn","ts":"2025-11-21T23:48:32.341440Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:48:32.361337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:48:32.372921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:48:32.393724Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:48:32.416888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:48:32.446862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:48:32.470324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:48:32.483436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:48:32.507178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:48:32.516352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:48:32.535319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:48:32.550711Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:48:32.567380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:48:32.587652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:48:32.622209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:48:32.637397Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:48:32.663638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:48:32.691732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:48:32.859489Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:48:47.768274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:48:47.782599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:49:10.728841Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:49:10.744091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:49:10.774047Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:49:10.788751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54904","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [ea59934e5547eaa1836042ec90c5594d31bad9d818cf377cf1a0fa06d816c2e9] <==
	2025/11/21 23:50:30 GCP Auth Webhook started!
	2025/11/21 23:50:39 Ready to marshal response ...
	2025/11/21 23:50:39 Ready to write response ...
	2025/11/21 23:50:40 Ready to marshal response ...
	2025/11/21 23:50:40 Ready to write response ...
	2025/11/21 23:50:40 Ready to marshal response ...
	2025/11/21 23:50:40 Ready to write response ...
	2025/11/21 23:51:02 Ready to marshal response ...
	2025/11/21 23:51:02 Ready to write response ...
	2025/11/21 23:51:02 Ready to marshal response ...
	2025/11/21 23:51:02 Ready to write response ...
	2025/11/21 23:51:02 Ready to marshal response ...
	2025/11/21 23:51:02 Ready to write response ...
	2025/11/21 23:51:11 Ready to marshal response ...
	2025/11/21 23:51:11 Ready to write response ...
	2025/11/21 23:51:24 Ready to marshal response ...
	2025/11/21 23:51:24 Ready to write response ...
	2025/11/21 23:51:27 Ready to marshal response ...
	2025/11/21 23:51:27 Ready to write response ...
	2025/11/21 23:51:46 Ready to marshal response ...
	2025/11/21 23:51:46 Ready to write response ...
	2025/11/21 23:53:49 Ready to marshal response ...
	2025/11/21 23:53:49 Ready to write response ...
	
	
	==> kernel <==
	 23:53:51 up  4:35,  0 user,  load average: 0.25, 0.89, 1.08
	Linux addons-882841 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [970af788676bddee24edc4dbf7882805510ac451d6658537c9d2152752c3ffee] <==
	I1121 23:51:42.415108       1 main.go:301] handling current node
	I1121 23:51:52.417875       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 23:51:52.417912       1 main.go:301] handling current node
	I1121 23:52:02.414978       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 23:52:02.415011       1 main.go:301] handling current node
	I1121 23:52:12.414957       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 23:52:12.414989       1 main.go:301] handling current node
	I1121 23:52:22.417940       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 23:52:22.417973       1 main.go:301] handling current node
	I1121 23:52:32.421876       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 23:52:32.421909       1 main.go:301] handling current node
	I1121 23:52:42.421970       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 23:52:42.422009       1 main.go:301] handling current node
	I1121 23:52:52.417906       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 23:52:52.417940       1 main.go:301] handling current node
	I1121 23:53:02.423184       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 23:53:02.423219       1 main.go:301] handling current node
	I1121 23:53:12.421932       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 23:53:12.421972       1 main.go:301] handling current node
	I1121 23:53:22.418264       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 23:53:22.418382       1 main.go:301] handling current node
	I1121 23:53:32.421881       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 23:53:32.421915       1 main.go:301] handling current node
	I1121 23:53:42.416708       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 23:53:42.416819       1 main.go:301] handling current node
	
	
	==> kube-apiserver [415d7ebb38dbfa2b139e79fcf924802ecf12d3ba74e075e30026fdd18353d343] <==
	 > logger="UnhandledError"
	E1121 23:49:45.966785       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.241.191:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.241.191:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.241.191:443: connect: connection refused" logger="UnhandledError"
	E1121 23:49:45.967748       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.241.191:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.241.191:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.241.191:443: connect: connection refused" logger="UnhandledError"
	W1121 23:49:46.733860       1 handler_proxy.go:99] no RequestInfo found in the context
	E1121 23:49:46.733903       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1121 23:49:46.733916       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1121 23:49:46.735096       1 handler_proxy.go:99] no RequestInfo found in the context
	E1121 23:49:46.735170       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1121 23:49:46.735180       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1121 23:49:50.985013       1 handler_proxy.go:99] no RequestInfo found in the context
	E1121 23:49:50.985066       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1121 23:49:50.985124       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.241.191:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.241.191:443/apis/metrics.k8s.io/v1beta1\": context deadline exceeded" logger="UnhandledError"
	I1121 23:49:51.041438       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1121 23:50:50.348423       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:37206: use of closed network connection
	E1121 23:50:50.757015       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:37240: use of closed network connection
	I1121 23:51:26.738129       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1121 23:51:27.125382       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.104.31.225"}
	I1121 23:51:36.915751       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1121 23:53:49.234020       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.96.219.97"}
	
	
	==> kube-controller-manager [f6c2269669bcf9942b43c38a6d80d882a37c12fba06a2b1b514b07dbd6183350] <==
	I1121 23:48:40.747766       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1121 23:48:40.749782       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-882841"
	I1121 23:48:40.749917       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1121 23:48:40.749659       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1121 23:48:40.749667       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1121 23:48:40.749682       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1121 23:48:40.749692       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1121 23:48:40.749593       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1121 23:48:40.749649       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1121 23:48:40.751003       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1121 23:48:40.751058       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 23:48:40.758724       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 23:48:40.759985       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1121 23:48:40.765342       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	E1121 23:48:45.854083       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1121 23:49:10.721924       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1121 23:49:10.722169       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1121 23:49:10.722258       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1121 23:49:10.762605       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1121 23:49:10.766724       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1121 23:49:10.822776       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 23:49:10.867906       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 23:49:25.759178       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1121 23:49:40.827929       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1121 23:49:40.892338       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [074654b9d6b9f820e2f61d8ef839ef5ebec8673802a3e034c02530f243f023d0] <==
	I1121 23:48:42.215012       1 server_linux.go:53] "Using iptables proxy"
	I1121 23:48:42.323653       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 23:48:42.437409       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 23:48:42.437439       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1121 23:48:42.437524       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 23:48:42.479624       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 23:48:42.479692       1 server_linux.go:132] "Using iptables Proxier"
	I1121 23:48:42.488132       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 23:48:42.488463       1 server.go:527] "Version info" version="v1.34.1"
	I1121 23:48:42.488479       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 23:48:42.489680       1 config.go:200] "Starting service config controller"
	I1121 23:48:42.489689       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 23:48:42.489705       1 config.go:106] "Starting endpoint slice config controller"
	I1121 23:48:42.489709       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 23:48:42.489720       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 23:48:42.489724       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 23:48:42.491014       1 config.go:309] "Starting node config controller"
	I1121 23:48:42.495868       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 23:48:42.495890       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 23:48:42.590111       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1121 23:48:42.590178       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1121 23:48:42.590395       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [ba805611fb053ae520be5762cfc188a2dd5915f488bed9770376cb5e14b60936] <==
	I1121 23:48:34.335668       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 23:48:34.335757       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 23:48:34.336207       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1121 23:48:34.336252       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1121 23:48:34.346574       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1121 23:48:34.347178       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1121 23:48:34.347238       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1121 23:48:34.347371       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1121 23:48:34.347446       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1121 23:48:34.347479       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1121 23:48:34.347515       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1121 23:48:34.349293       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1121 23:48:34.349413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1121 23:48:34.349493       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1121 23:48:34.351330       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1121 23:48:34.351446       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1121 23:48:34.351554       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1121 23:48:34.351786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1121 23:48:34.351973       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1121 23:48:34.352013       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1121 23:48:34.352049       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1121 23:48:34.352084       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1121 23:48:34.352133       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1121 23:48:35.309012       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1121 23:48:37.836669       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 21 23:52:41 addons-882841 kubelet[1277]: I1121 23:52:41.563504    1277 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-rrtfc" secret="" err="secret \"gcp-auth\" not found"
	Nov 21 23:52:54 addons-882841 kubelet[1277]: I1121 23:52:54.562784    1277 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-66bc5c9577-zjrtb" secret="" err="secret \"gcp-auth\" not found"
	Nov 21 23:53:15 addons-882841 kubelet[1277]: I1121 23:53:15.563209    1277 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-5jvr4" secret="" err="secret \"gcp-auth\" not found"
	Nov 21 23:53:33 addons-882841 kubelet[1277]: I1121 23:53:33.962889    1277 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-8wv2f" secret="" err="secret \"gcp-auth\" not found"
	Nov 21 23:53:35 addons-882841 kubelet[1277]: I1121 23:53:35.935045    1277 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-8wv2f" secret="" err="secret \"gcp-auth\" not found"
	Nov 21 23:53:35 addons-882841 kubelet[1277]: I1121 23:53:35.935097    1277 scope.go:117] "RemoveContainer" containerID="0868748bb58a48379bf2d95eb9e20bee958416457d17f4bafc6d4545ee42099a"
	Nov 21 23:53:36 addons-882841 kubelet[1277]: I1121 23:53:36.695498    1277 scope.go:117] "RemoveContainer" containerID="0868748bb58a48379bf2d95eb9e20bee958416457d17f4bafc6d4545ee42099a"
	Nov 21 23:53:36 addons-882841 kubelet[1277]: I1121 23:53:36.941587    1277 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-8wv2f" secret="" err="secret \"gcp-auth\" not found"
	Nov 21 23:53:36 addons-882841 kubelet[1277]: I1121 23:53:36.941653    1277 scope.go:117] "RemoveContainer" containerID="273bfc38d60cb0b9c99a162365b410090fafa11e0c75ee599f0b7a175ec83c6b"
	Nov 21 23:53:36 addons-882841 kubelet[1277]: E1121 23:53:36.941888    1277 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-8wv2f_kube-system(dfc3c5ef-fcf8-4a4c-908c-fa2a665d682c)\"" pod="kube-system/registry-creds-764b6fb674-8wv2f" podUID="dfc3c5ef-fcf8-4a4c-908c-fa2a665d682c"
	Nov 21 23:53:37 addons-882841 kubelet[1277]: I1121 23:53:37.945421    1277 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-8wv2f" secret="" err="secret \"gcp-auth\" not found"
	Nov 21 23:53:37 addons-882841 kubelet[1277]: I1121 23:53:37.945480    1277 scope.go:117] "RemoveContainer" containerID="273bfc38d60cb0b9c99a162365b410090fafa11e0c75ee599f0b7a175ec83c6b"
	Nov 21 23:53:37 addons-882841 kubelet[1277]: E1121 23:53:37.945631    1277 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-8wv2f_kube-system(dfc3c5ef-fcf8-4a4c-908c-fa2a665d682c)\"" pod="kube-system/registry-creds-764b6fb674-8wv2f" podUID="dfc3c5ef-fcf8-4a4c-908c-fa2a665d682c"
	Nov 21 23:53:38 addons-882841 kubelet[1277]: I1121 23:53:38.563228    1277 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-4jvp9" secret="" err="secret \"gcp-auth\" not found"
	Nov 21 23:53:44 addons-882841 kubelet[1277]: I1121 23:53:44.563223    1277 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-rrtfc" secret="" err="secret \"gcp-auth\" not found"
	Nov 21 23:53:49 addons-882841 kubelet[1277]: E1121 23:53:49.122175    1277 status_manager.go:1018] "Failed to get status for pod" err="pods \"hello-world-app-5d498dc89-4kgjp\" is forbidden: User \"system:node:addons-882841\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-882841' and this object" podUID="c846fd57-6d0a-4320-b7ab-02b893a92b2a" pod="default/hello-world-app-5d498dc89-4kgjp"
	Nov 21 23:53:49 addons-882841 kubelet[1277]: I1121 23:53:49.188079    1277 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj5tx\" (UniqueName: \"kubernetes.io/projected/c846fd57-6d0a-4320-b7ab-02b893a92b2a-kube-api-access-sj5tx\") pod \"hello-world-app-5d498dc89-4kgjp\" (UID: \"c846fd57-6d0a-4320-b7ab-02b893a92b2a\") " pod="default/hello-world-app-5d498dc89-4kgjp"
	Nov 21 23:53:49 addons-882841 kubelet[1277]: I1121 23:53:49.188158    1277 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/c846fd57-6d0a-4320-b7ab-02b893a92b2a-gcp-creds\") pod \"hello-world-app-5d498dc89-4kgjp\" (UID: \"c846fd57-6d0a-4320-b7ab-02b893a92b2a\") " pod="default/hello-world-app-5d498dc89-4kgjp"
	Nov 21 23:53:49 addons-882841 kubelet[1277]: W1121 23:53:49.431807    1277 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/cbf01a114cc53a8b6c72a0ed56d9776d5ffd3dfdacd5a45cb3e08babfb8e2033/crio-e0a6a52ddf05c947be01659489e978468a0be0d54cef8b2d6246e80671403de3 WatchSource:0}: Error finding container e0a6a52ddf05c947be01659489e978468a0be0d54cef8b2d6246e80671403de3: Status 404 returned error can't find the container with id e0a6a52ddf05c947be01659489e978468a0be0d54cef8b2d6246e80671403de3
	Nov 21 23:53:49 addons-882841 kubelet[1277]: I1121 23:53:49.563143    1277 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-8wv2f" secret="" err="secret \"gcp-auth\" not found"
	Nov 21 23:53:49 addons-882841 kubelet[1277]: I1121 23:53:49.563224    1277 scope.go:117] "RemoveContainer" containerID="273bfc38d60cb0b9c99a162365b410090fafa11e0c75ee599f0b7a175ec83c6b"
	Nov 21 23:53:49 addons-882841 kubelet[1277]: I1121 23:53:49.995254    1277 scope.go:117] "RemoveContainer" containerID="273bfc38d60cb0b9c99a162365b410090fafa11e0c75ee599f0b7a175ec83c6b"
	Nov 21 23:53:49 addons-882841 kubelet[1277]: I1121 23:53:49.995425    1277 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-8wv2f" secret="" err="secret \"gcp-auth\" not found"
	Nov 21 23:53:49 addons-882841 kubelet[1277]: I1121 23:53:49.995481    1277 scope.go:117] "RemoveContainer" containerID="59d531c0f0a7a7f8e8d693bbbf23dc2d087ef4a78aac796f830d21d42626a0f0"
	Nov 21 23:53:49 addons-882841 kubelet[1277]: E1121 23:53:49.995620    1277 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 20s restarting failed container=registry-creds pod=registry-creds-764b6fb674-8wv2f_kube-system(dfc3c5ef-fcf8-4a4c-908c-fa2a665d682c)\"" pod="kube-system/registry-creds-764b6fb674-8wv2f" podUID="dfc3c5ef-fcf8-4a4c-908c-fa2a665d682c"
	
	
	==> storage-provisioner [561d110537c5cbfb43c832086d9c8216a7180387df20a1b9c68b29a4b682f207] <==
	W1121 23:53:26.902976       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:53:28.906062       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:53:28.913967       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:53:30.916891       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:53:30.923234       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:53:32.925897       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:53:32.930264       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:53:34.933565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:53:34.938341       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:53:36.944966       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:53:36.954191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:53:38.957435       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:53:38.964562       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:53:40.967297       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:53:40.973898       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:53:42.976270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:53:42.980917       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:53:44.984304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:53:44.988815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:53:46.991990       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:53:46.996325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:53:49.010192       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:53:49.027073       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:53:51.042451       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:53:51.058113       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-882841 -n addons-882841
helpers_test.go:269: (dbg) Run:  kubectl --context addons-882841 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-f9tdh ingress-nginx-admission-patch-lfbrr
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-882841 describe pod ingress-nginx-admission-create-f9tdh ingress-nginx-admission-patch-lfbrr
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-882841 describe pod ingress-nginx-admission-create-f9tdh ingress-nginx-admission-patch-lfbrr: exit status 1 (79.140343ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-f9tdh" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-lfbrr" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-882841 describe pod ingress-nginx-admission-create-f9tdh ingress-nginx-admission-patch-lfbrr: exit status 1
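The NotFound errors above are most likely benign: ingress-nginx-admission-create-* and ingress-nginx-admission-patch-* are pods of short-lived admission Jobs, so they appear in the non-running filter and can be garbage-collected before the follow-up describe runs. A hedged way to double-check from the same context (the Job names are assumed to match the pod prefixes above; this is a sketch, not part of the harness):

	kubectl --context addons-882841 -n ingress-nginx get jobs
	kubectl --context addons-882841 -n ingress-nginx get pods --field-selector=status.phase!=Running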
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-882841 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-882841 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (302.866512ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1121 23:53:52.423297  527298 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:53:52.424116  527298 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:53:52.424152  527298 out.go:374] Setting ErrFile to fd 2...
	I1121 23:53:52.424170  527298 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:53:52.424465  527298 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1121 23:53:52.424781  527298 mustload.go:66] Loading cluster: addons-882841
	I1121 23:53:52.425288  527298 config.go:182] Loaded profile config "addons-882841": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:53:52.425328  527298 addons.go:622] checking whether the cluster is paused
	I1121 23:53:52.425497  527298 config.go:182] Loaded profile config "addons-882841": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:53:52.425543  527298 host.go:66] Checking if "addons-882841" exists ...
	I1121 23:53:52.426296  527298 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:53:52.448982  527298 ssh_runner.go:195] Run: systemctl --version
	I1121 23:53:52.449041  527298 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:53:52.481839  527298 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	I1121 23:53:52.589374  527298 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 23:53:52.589480  527298 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 23:53:52.626769  527298 cri.go:89] found id: "59d531c0f0a7a7f8e8d693bbbf23dc2d087ef4a78aac796f830d21d42626a0f0"
	I1121 23:53:52.626792  527298 cri.go:89] found id: "d4288bfcc52cab787e4c57cd6f6ce8b5e4eab8e0f753f7b4b8c0dfbb6d7fcacf"
	I1121 23:53:52.626797  527298 cri.go:89] found id: "c2b953c3eb94bb9506b88f8f9082db05bab7bbad3c3c92fb83a61fd4148bcd7c"
	I1121 23:53:52.626801  527298 cri.go:89] found id: "4b84c2719358c69a49238d48c9737dea6a7f98672fde68733fbdfd0c5f76c519"
	I1121 23:53:52.626804  527298 cri.go:89] found id: "03e07c8bd5633a56d64051d6a63832c1a6fc109d661151083091d50ee1a7dfb7"
	I1121 23:53:52.626807  527298 cri.go:89] found id: "4f400d29139bbf4aeaa41d46055af4710e38ae5d6a844d3ee8b87ce4de3a0f3a"
	I1121 23:53:52.626810  527298 cri.go:89] found id: "0ecc1bcf505044ab1e7b917a3904e9d8ff652e08129c43483e6ff6f465bc7f48"
	I1121 23:53:52.626813  527298 cri.go:89] found id: "0d6392a88c56afc228d16b7659bcfa96628c1585bdb4b03af537ff609bf9f34a"
	I1121 23:53:52.626817  527298 cri.go:89] found id: "492d7c5835fb8502b60633915d9d3f885aa7bc3696e4febec5b394bab0a6773b"
	I1121 23:53:52.626827  527298 cri.go:89] found id: "96794062627c79a139839f726287bb12566d789fa0f4d5b1994cd88518a2e2eb"
	I1121 23:53:52.626830  527298 cri.go:89] found id: "d6df5ea7f4eb5e6b4852fd9cf791dda6c35753e420985175cbcc2a80b368d82b"
	I1121 23:53:52.626833  527298 cri.go:89] found id: "800239fcbfd600dbcec2ac03099f8200b62cf3769357e03ab2d40f672490913e"
	I1121 23:53:52.626836  527298 cri.go:89] found id: "1d9bfd16346a7b544d9696f5ac0700133b9c44dfb05b597e6ece14fdf7c1ee4d"
	I1121 23:53:52.626839  527298 cri.go:89] found id: "b7eb954adbbab5deddd57f625c7dce81a5fbc6e9ee1d2cb260d88e1fbd1482da"
	I1121 23:53:52.626842  527298 cri.go:89] found id: "ba42295e49f9af2999894c8bde53ee31c193600b80cc921d12a7b280aefbca13"
	I1121 23:53:52.626850  527298 cri.go:89] found id: "36f901d7268653e5e64e73d2f8c787b658cab9899bc17b2cc522fb984b5ae3f7"
	I1121 23:53:52.626853  527298 cri.go:89] found id: "561d110537c5cbfb43c832086d9c8216a7180387df20a1b9c68b29a4b682f207"
	I1121 23:53:52.626858  527298 cri.go:89] found id: "e6dc31e0930681fac6fbd625f4ec7a07e57c10d13a728a7ec163a4c66a6d4a2b"
	I1121 23:53:52.626861  527298 cri.go:89] found id: "074654b9d6b9f820e2f61d8ef839ef5ebec8673802a3e034c02530f243f023d0"
	I1121 23:53:52.626864  527298 cri.go:89] found id: "970af788676bddee24edc4dbf7882805510ac451d6658537c9d2152752c3ffee"
	I1121 23:53:52.626868  527298 cri.go:89] found id: "f6c2269669bcf9942b43c38a6d80d882a37c12fba06a2b1b514b07dbd6183350"
	I1121 23:53:52.626871  527298 cri.go:89] found id: "415d7ebb38dbfa2b139e79fcf924802ecf12d3ba74e075e30026fdd18353d343"
	I1121 23:53:52.626874  527298 cri.go:89] found id: "ba805611fb053ae520be5762cfc188a2dd5915f488bed9770376cb5e14b60936"
	I1121 23:53:52.626877  527298 cri.go:89] found id: "0b539dfc17788b7400ee2eb5abadb008e5a8a9796bb112d8a69f14a34f2fd551"
	I1121 23:53:52.626880  527298 cri.go:89] found id: ""
	I1121 23:53:52.626943  527298 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 23:53:52.642802  527298 out.go:203] 
	W1121 23:53:52.645884  527298 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:53:52Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:53:52Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 23:53:52.645905  527298 out.go:285] * 
	* 
	W1121 23:53:52.652592  527298 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 23:53:52.655527  527298 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-882841 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
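The exit status 11 traces back to minikube's paused-state check rather than the ingress-dns addon itself: as the stderr shows, the CLI first lists kube-system containers via crictl and then runs "sudo runc list -f json", which fails with "open /run/runc: no such file or directory" on this CRI-O node, so MK_ADDON_DISABLE_PAUSED is raised regardless of whether anything is actually paused. A hedged way to reproduce the check by hand against the same profile (the ssh invocation and the /run paths below are assumptions to probe, not guaranteed locations):

	# the two commands minikube ran, copied from the stderr above
	minikube -p addons-882841 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	minikube -p addons-882841 ssh -- sudo runc list -f json
	# check whether a runc (or crio) state directory exists at all on the node
	minikube -p addons-882841 ssh -- "sudo ls -d /run/runc /run/crio || true"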
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-882841 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-882841 addons disable ingress --alsologtostderr -v=1: exit status 11 (255.294038ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1121 23:53:52.709984  527409 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:53:52.711715  527409 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:53:52.711736  527409 out.go:374] Setting ErrFile to fd 2...
	I1121 23:53:52.711742  527409 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:53:52.712066  527409 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1121 23:53:52.712413  527409 mustload.go:66] Loading cluster: addons-882841
	I1121 23:53:52.712909  527409 config.go:182] Loaded profile config "addons-882841": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:53:52.712933  527409 addons.go:622] checking whether the cluster is paused
	I1121 23:53:52.713082  527409 config.go:182] Loaded profile config "addons-882841": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:53:52.713102  527409 host.go:66] Checking if "addons-882841" exists ...
	I1121 23:53:52.713698  527409 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:53:52.731524  527409 ssh_runner.go:195] Run: systemctl --version
	I1121 23:53:52.731586  527409 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:53:52.750197  527409 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	I1121 23:53:52.848518  527409 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 23:53:52.848596  527409 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 23:53:52.882078  527409 cri.go:89] found id: "59d531c0f0a7a7f8e8d693bbbf23dc2d087ef4a78aac796f830d21d42626a0f0"
	I1121 23:53:52.882099  527409 cri.go:89] found id: "d4288bfcc52cab787e4c57cd6f6ce8b5e4eab8e0f753f7b4b8c0dfbb6d7fcacf"
	I1121 23:53:52.882104  527409 cri.go:89] found id: "c2b953c3eb94bb9506b88f8f9082db05bab7bbad3c3c92fb83a61fd4148bcd7c"
	I1121 23:53:52.882109  527409 cri.go:89] found id: "4b84c2719358c69a49238d48c9737dea6a7f98672fde68733fbdfd0c5f76c519"
	I1121 23:53:52.882112  527409 cri.go:89] found id: "03e07c8bd5633a56d64051d6a63832c1a6fc109d661151083091d50ee1a7dfb7"
	I1121 23:53:52.882116  527409 cri.go:89] found id: "4f400d29139bbf4aeaa41d46055af4710e38ae5d6a844d3ee8b87ce4de3a0f3a"
	I1121 23:53:52.882119  527409 cri.go:89] found id: "0ecc1bcf505044ab1e7b917a3904e9d8ff652e08129c43483e6ff6f465bc7f48"
	I1121 23:53:52.882122  527409 cri.go:89] found id: "0d6392a88c56afc228d16b7659bcfa96628c1585bdb4b03af537ff609bf9f34a"
	I1121 23:53:52.882125  527409 cri.go:89] found id: "492d7c5835fb8502b60633915d9d3f885aa7bc3696e4febec5b394bab0a6773b"
	I1121 23:53:52.882132  527409 cri.go:89] found id: "96794062627c79a139839f726287bb12566d789fa0f4d5b1994cd88518a2e2eb"
	I1121 23:53:52.882135  527409 cri.go:89] found id: "d6df5ea7f4eb5e6b4852fd9cf791dda6c35753e420985175cbcc2a80b368d82b"
	I1121 23:53:52.882138  527409 cri.go:89] found id: "800239fcbfd600dbcec2ac03099f8200b62cf3769357e03ab2d40f672490913e"
	I1121 23:53:52.882141  527409 cri.go:89] found id: "1d9bfd16346a7b544d9696f5ac0700133b9c44dfb05b597e6ece14fdf7c1ee4d"
	I1121 23:53:52.882144  527409 cri.go:89] found id: "b7eb954adbbab5deddd57f625c7dce81a5fbc6e9ee1d2cb260d88e1fbd1482da"
	I1121 23:53:52.882147  527409 cri.go:89] found id: "ba42295e49f9af2999894c8bde53ee31c193600b80cc921d12a7b280aefbca13"
	I1121 23:53:52.882153  527409 cri.go:89] found id: "36f901d7268653e5e64e73d2f8c787b658cab9899bc17b2cc522fb984b5ae3f7"
	I1121 23:53:52.882164  527409 cri.go:89] found id: "561d110537c5cbfb43c832086d9c8216a7180387df20a1b9c68b29a4b682f207"
	I1121 23:53:52.882169  527409 cri.go:89] found id: "e6dc31e0930681fac6fbd625f4ec7a07e57c10d13a728a7ec163a4c66a6d4a2b"
	I1121 23:53:52.882172  527409 cri.go:89] found id: "074654b9d6b9f820e2f61d8ef839ef5ebec8673802a3e034c02530f243f023d0"
	I1121 23:53:52.882175  527409 cri.go:89] found id: "970af788676bddee24edc4dbf7882805510ac451d6658537c9d2152752c3ffee"
	I1121 23:53:52.882181  527409 cri.go:89] found id: "f6c2269669bcf9942b43c38a6d80d882a37c12fba06a2b1b514b07dbd6183350"
	I1121 23:53:52.882187  527409 cri.go:89] found id: "415d7ebb38dbfa2b139e79fcf924802ecf12d3ba74e075e30026fdd18353d343"
	I1121 23:53:52.882190  527409 cri.go:89] found id: "ba805611fb053ae520be5762cfc188a2dd5915f488bed9770376cb5e14b60936"
	I1121 23:53:52.882193  527409 cri.go:89] found id: "0b539dfc17788b7400ee2eb5abadb008e5a8a9796bb112d8a69f14a34f2fd551"
	I1121 23:53:52.882196  527409 cri.go:89] found id: ""
	I1121 23:53:52.882248  527409 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 23:53:52.897395  527409 out.go:203] 
	W1121 23:53:52.900390  527409 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:53:52Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:53:52Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 23:53:52.900416  527409 out.go:285] * 
	* 
	W1121 23:53:52.907292  527409 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 23:53:52.910127  527409 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-882841 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (146.62s)
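Note: the addon enable/disable failures in this run share one signature. Before touching an addon, minikube checks whether the cluster is paused by listing kube-system containers through crictl (the "found id" lines above) and then running `sudo runc list -f json` on the node; that runc call exits 1 with "open /run/runc: no such file or directory", so the command aborts with MK_ADDON_DISABLE_PAUSED. The commands below are a minimal diagnostic sketch for reproducing the check by hand; the profile name addons-882841 and the binary path are taken from this run, and the assumption is that the CRI-O node is using a runtime (or runtime state directory) other than runc's default /run/runc.

    out/minikube-linux-arm64 -p addons-882841 ssh -- sudo runc list -f json     # the exact call the paused check makes
    out/minikube-linux-arm64 -p addons-882841 ssh -- ls -ld /run/runc           # confirm the runc state directory is absent
    out/minikube-linux-arm64 -p addons-882841 ssh -- sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system   # the CRI-level listing that does succeed

If runc is still the configured runtime and only the directory is missing, creating it inside the node (`sudo mkdir -p /run/runc`) is a plausible workaround, but that is an assumption, not something verified in this run.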

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (5.46s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-84krr" [673b45f1-c330-47ec-a2ea-9d9490b0527d] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004235734s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-882841 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-882841 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (459.125092ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1121 23:51:25.931964  525256 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:51:25.932821  525256 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:51:25.932835  525256 out.go:374] Setting ErrFile to fd 2...
	I1121 23:51:25.932841  525256 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:51:25.933118  525256 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1121 23:51:25.933390  525256 mustload.go:66] Loading cluster: addons-882841
	I1121 23:51:25.933794  525256 config.go:182] Loaded profile config "addons-882841": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:51:25.934441  525256 addons.go:622] checking whether the cluster is paused
	I1121 23:51:25.934599  525256 config.go:182] Loaded profile config "addons-882841": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:51:25.934613  525256 host.go:66] Checking if "addons-882841" exists ...
	I1121 23:51:25.935219  525256 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:51:25.988781  525256 ssh_runner.go:195] Run: systemctl --version
	I1121 23:51:25.988844  525256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:51:26.048369  525256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	I1121 23:51:26.163569  525256 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 23:51:26.163664  525256 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 23:51:26.254341  525256 cri.go:89] found id: "d4288bfcc52cab787e4c57cd6f6ce8b5e4eab8e0f753f7b4b8c0dfbb6d7fcacf"
	I1121 23:51:26.254365  525256 cri.go:89] found id: "c2b953c3eb94bb9506b88f8f9082db05bab7bbad3c3c92fb83a61fd4148bcd7c"
	I1121 23:51:26.254370  525256 cri.go:89] found id: "4b84c2719358c69a49238d48c9737dea6a7f98672fde68733fbdfd0c5f76c519"
	I1121 23:51:26.254374  525256 cri.go:89] found id: "03e07c8bd5633a56d64051d6a63832c1a6fc109d661151083091d50ee1a7dfb7"
	I1121 23:51:26.254378  525256 cri.go:89] found id: "4f400d29139bbf4aeaa41d46055af4710e38ae5d6a844d3ee8b87ce4de3a0f3a"
	I1121 23:51:26.254381  525256 cri.go:89] found id: "0ecc1bcf505044ab1e7b917a3904e9d8ff652e08129c43483e6ff6f465bc7f48"
	I1121 23:51:26.254389  525256 cri.go:89] found id: "0d6392a88c56afc228d16b7659bcfa96628c1585bdb4b03af537ff609bf9f34a"
	I1121 23:51:26.254393  525256 cri.go:89] found id: "492d7c5835fb8502b60633915d9d3f885aa7bc3696e4febec5b394bab0a6773b"
	I1121 23:51:26.254396  525256 cri.go:89] found id: "96794062627c79a139839f726287bb12566d789fa0f4d5b1994cd88518a2e2eb"
	I1121 23:51:26.254404  525256 cri.go:89] found id: "d6df5ea7f4eb5e6b4852fd9cf791dda6c35753e420985175cbcc2a80b368d82b"
	I1121 23:51:26.254408  525256 cri.go:89] found id: "800239fcbfd600dbcec2ac03099f8200b62cf3769357e03ab2d40f672490913e"
	I1121 23:51:26.254411  525256 cri.go:89] found id: "1d9bfd16346a7b544d9696f5ac0700133b9c44dfb05b597e6ece14fdf7c1ee4d"
	I1121 23:51:26.254435  525256 cri.go:89] found id: "b7eb954adbbab5deddd57f625c7dce81a5fbc6e9ee1d2cb260d88e1fbd1482da"
	I1121 23:51:26.254443  525256 cri.go:89] found id: "ba42295e49f9af2999894c8bde53ee31c193600b80cc921d12a7b280aefbca13"
	I1121 23:51:26.254446  525256 cri.go:89] found id: "36f901d7268653e5e64e73d2f8c787b658cab9899bc17b2cc522fb984b5ae3f7"
	I1121 23:51:26.254451  525256 cri.go:89] found id: "561d110537c5cbfb43c832086d9c8216a7180387df20a1b9c68b29a4b682f207"
	I1121 23:51:26.254462  525256 cri.go:89] found id: "e6dc31e0930681fac6fbd625f4ec7a07e57c10d13a728a7ec163a4c66a6d4a2b"
	I1121 23:51:26.254473  525256 cri.go:89] found id: "074654b9d6b9f820e2f61d8ef839ef5ebec8673802a3e034c02530f243f023d0"
	I1121 23:51:26.254477  525256 cri.go:89] found id: "970af788676bddee24edc4dbf7882805510ac451d6658537c9d2152752c3ffee"
	I1121 23:51:26.254480  525256 cri.go:89] found id: "f6c2269669bcf9942b43c38a6d80d882a37c12fba06a2b1b514b07dbd6183350"
	I1121 23:51:26.254485  525256 cri.go:89] found id: "415d7ebb38dbfa2b139e79fcf924802ecf12d3ba74e075e30026fdd18353d343"
	I1121 23:51:26.254493  525256 cri.go:89] found id: "ba805611fb053ae520be5762cfc188a2dd5915f488bed9770376cb5e14b60936"
	I1121 23:51:26.254496  525256 cri.go:89] found id: "0b539dfc17788b7400ee2eb5abadb008e5a8a9796bb112d8a69f14a34f2fd551"
	I1121 23:51:26.254511  525256 cri.go:89] found id: ""
	I1121 23:51:26.254570  525256 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 23:51:26.277017  525256 out.go:203] 
	W1121 23:51:26.280008  525256 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:51:26Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:51:26Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 23:51:26.280089  525256 out.go:285] * 
	* 
	W1121 23:51:26.287339  525256 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 23:51:26.291101  525256 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-882841 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.46s)
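Note: the gadget DaemonSet pod itself became healthy within ~5s; only the `addons disable inspektor-gadget` step fails, on the same paused check described above. A sketch for confirming the addon workload independently of the failing disable path, assuming the addon's default "gadget" namespace:

    kubectl --context addons-882841 -n gadget get daemonset,pods -o wide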

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.42s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.97632ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-7tk8r" [99849e7c-e2a9-4b60-b8f9-7ed8bd487c73] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003972512s
addons_test.go:463: (dbg) Run:  kubectl --context addons-882841 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-882841 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-882841 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (319.728193ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1121 23:51:20.561330  525112 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:51:20.561938  525112 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:51:20.561957  525112 out.go:374] Setting ErrFile to fd 2...
	I1121 23:51:20.561964  525112 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:51:20.562276  525112 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1121 23:51:20.562613  525112 mustload.go:66] Loading cluster: addons-882841
	I1121 23:51:20.563058  525112 config.go:182] Loaded profile config "addons-882841": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:51:20.563103  525112 addons.go:622] checking whether the cluster is paused
	I1121 23:51:20.563248  525112 config.go:182] Loaded profile config "addons-882841": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:51:20.563278  525112 host.go:66] Checking if "addons-882841" exists ...
	I1121 23:51:20.563999  525112 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:51:20.595495  525112 ssh_runner.go:195] Run: systemctl --version
	I1121 23:51:20.595562  525112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:51:20.647408  525112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	I1121 23:51:20.768463  525112 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 23:51:20.768548  525112 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 23:51:20.800382  525112 cri.go:89] found id: "d4288bfcc52cab787e4c57cd6f6ce8b5e4eab8e0f753f7b4b8c0dfbb6d7fcacf"
	I1121 23:51:20.800409  525112 cri.go:89] found id: "c2b953c3eb94bb9506b88f8f9082db05bab7bbad3c3c92fb83a61fd4148bcd7c"
	I1121 23:51:20.800415  525112 cri.go:89] found id: "4b84c2719358c69a49238d48c9737dea6a7f98672fde68733fbdfd0c5f76c519"
	I1121 23:51:20.800419  525112 cri.go:89] found id: "03e07c8bd5633a56d64051d6a63832c1a6fc109d661151083091d50ee1a7dfb7"
	I1121 23:51:20.800423  525112 cri.go:89] found id: "4f400d29139bbf4aeaa41d46055af4710e38ae5d6a844d3ee8b87ce4de3a0f3a"
	I1121 23:51:20.800427  525112 cri.go:89] found id: "0ecc1bcf505044ab1e7b917a3904e9d8ff652e08129c43483e6ff6f465bc7f48"
	I1121 23:51:20.800432  525112 cri.go:89] found id: "0d6392a88c56afc228d16b7659bcfa96628c1585bdb4b03af537ff609bf9f34a"
	I1121 23:51:20.800436  525112 cri.go:89] found id: "492d7c5835fb8502b60633915d9d3f885aa7bc3696e4febec5b394bab0a6773b"
	I1121 23:51:20.800439  525112 cri.go:89] found id: "96794062627c79a139839f726287bb12566d789fa0f4d5b1994cd88518a2e2eb"
	I1121 23:51:20.800445  525112 cri.go:89] found id: "d6df5ea7f4eb5e6b4852fd9cf791dda6c35753e420985175cbcc2a80b368d82b"
	I1121 23:51:20.800449  525112 cri.go:89] found id: "800239fcbfd600dbcec2ac03099f8200b62cf3769357e03ab2d40f672490913e"
	I1121 23:51:20.800452  525112 cri.go:89] found id: "1d9bfd16346a7b544d9696f5ac0700133b9c44dfb05b597e6ece14fdf7c1ee4d"
	I1121 23:51:20.800456  525112 cri.go:89] found id: "b7eb954adbbab5deddd57f625c7dce81a5fbc6e9ee1d2cb260d88e1fbd1482da"
	I1121 23:51:20.800459  525112 cri.go:89] found id: "ba42295e49f9af2999894c8bde53ee31c193600b80cc921d12a7b280aefbca13"
	I1121 23:51:20.800462  525112 cri.go:89] found id: "36f901d7268653e5e64e73d2f8c787b658cab9899bc17b2cc522fb984b5ae3f7"
	I1121 23:51:20.800467  525112 cri.go:89] found id: "561d110537c5cbfb43c832086d9c8216a7180387df20a1b9c68b29a4b682f207"
	I1121 23:51:20.800475  525112 cri.go:89] found id: "e6dc31e0930681fac6fbd625f4ec7a07e57c10d13a728a7ec163a4c66a6d4a2b"
	I1121 23:51:20.800480  525112 cri.go:89] found id: "074654b9d6b9f820e2f61d8ef839ef5ebec8673802a3e034c02530f243f023d0"
	I1121 23:51:20.800483  525112 cri.go:89] found id: "970af788676bddee24edc4dbf7882805510ac451d6658537c9d2152752c3ffee"
	I1121 23:51:20.800486  525112 cri.go:89] found id: "f6c2269669bcf9942b43c38a6d80d882a37c12fba06a2b1b514b07dbd6183350"
	I1121 23:51:20.800492  525112 cri.go:89] found id: "415d7ebb38dbfa2b139e79fcf924802ecf12d3ba74e075e30026fdd18353d343"
	I1121 23:51:20.800499  525112 cri.go:89] found id: "ba805611fb053ae520be5762cfc188a2dd5915f488bed9770376cb5e14b60936"
	I1121 23:51:20.800502  525112 cri.go:89] found id: "0b539dfc17788b7400ee2eb5abadb008e5a8a9796bb112d8a69f14a34f2fd551"
	I1121 23:51:20.800505  525112 cri.go:89] found id: ""
	I1121 23:51:20.800555  525112 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 23:51:20.815443  525112 out.go:203] 
	W1121 23:51:20.818368  525112 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:51:20Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:51:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 23:51:20.818390  525112 out.go:285] * 
	* 
	W1121 23:51:20.825103  525112 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 23:51:20.827987  525112 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-882841 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.42s)
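Note: `kubectl top pods` succeeded immediately before the disable step, so metrics-server itself is serving data; the failure is again only the paused check. A quick way to double-check the metrics pipeline by hand (a sketch, assuming the standard APIService name registered by metrics-server):

    kubectl --context addons-882841 get apiservice v1beta1.metrics.k8s.io
    kubectl --context addons-882841 top nodes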

                                                
                                    
x
+
TestAddons/parallel/CSI (43.51s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1121 23:51:12.348367  516937 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1121 23:51:12.352535  516937 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1121 23:51:12.352559  516937 kapi.go:107] duration metric: took 4.204039ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 4.21514ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-882841 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-882841 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-882841 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-882841 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-882841 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-882841 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-882841 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-882841 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-882841 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-882841 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-882841 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-882841 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-882841 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-882841 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-882841 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [e6f26874-b3b8-48cf-bb9c-7b2845436fd6] Pending
helpers_test.go:352: "task-pv-pod" [e6f26874-b3b8-48cf-bb9c-7b2845436fd6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [e6f26874-b3b8-48cf-bb9c-7b2845436fd6] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.003406115s
addons_test.go:572: (dbg) Run:  kubectl --context addons-882841 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-882841 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-882841 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-882841 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-882841 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-882841 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-882841 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-882841 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-882841 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-882841 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-882841 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-882841 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-882841 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-882841 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-882841 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [1e403de8-8f61-42a1-8b3a-d4e317f28c98] Pending
helpers_test.go:352: "task-pv-pod-restore" [1e403de8-8f61-42a1-8b3a-d4e317f28c98] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [1e403de8-8f61-42a1-8b3a-d4e317f28c98] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003253953s
addons_test.go:614: (dbg) Run:  kubectl --context addons-882841 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-882841 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-882841 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-882841 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-882841 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (269.951768ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1121 23:51:55.360729  526077 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:51:55.361384  526077 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:51:55.361426  526077 out.go:374] Setting ErrFile to fd 2...
	I1121 23:51:55.361448  526077 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:51:55.361725  526077 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1121 23:51:55.362076  526077 mustload.go:66] Loading cluster: addons-882841
	I1121 23:51:55.362614  526077 config.go:182] Loaded profile config "addons-882841": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:51:55.362662  526077 addons.go:622] checking whether the cluster is paused
	I1121 23:51:55.362800  526077 config.go:182] Loaded profile config "addons-882841": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:51:55.362836  526077 host.go:66] Checking if "addons-882841" exists ...
	I1121 23:51:55.363341  526077 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:51:55.382297  526077 ssh_runner.go:195] Run: systemctl --version
	I1121 23:51:55.382351  526077 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:51:55.400344  526077 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	I1121 23:51:55.505839  526077 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 23:51:55.505918  526077 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 23:51:55.539707  526077 cri.go:89] found id: "d4288bfcc52cab787e4c57cd6f6ce8b5e4eab8e0f753f7b4b8c0dfbb6d7fcacf"
	I1121 23:51:55.539730  526077 cri.go:89] found id: "c2b953c3eb94bb9506b88f8f9082db05bab7bbad3c3c92fb83a61fd4148bcd7c"
	I1121 23:51:55.539735  526077 cri.go:89] found id: "4b84c2719358c69a49238d48c9737dea6a7f98672fde68733fbdfd0c5f76c519"
	I1121 23:51:55.539739  526077 cri.go:89] found id: "03e07c8bd5633a56d64051d6a63832c1a6fc109d661151083091d50ee1a7dfb7"
	I1121 23:51:55.539742  526077 cri.go:89] found id: "4f400d29139bbf4aeaa41d46055af4710e38ae5d6a844d3ee8b87ce4de3a0f3a"
	I1121 23:51:55.539746  526077 cri.go:89] found id: "0ecc1bcf505044ab1e7b917a3904e9d8ff652e08129c43483e6ff6f465bc7f48"
	I1121 23:51:55.539749  526077 cri.go:89] found id: "0d6392a88c56afc228d16b7659bcfa96628c1585bdb4b03af537ff609bf9f34a"
	I1121 23:51:55.539752  526077 cri.go:89] found id: "492d7c5835fb8502b60633915d9d3f885aa7bc3696e4febec5b394bab0a6773b"
	I1121 23:51:55.539755  526077 cri.go:89] found id: "96794062627c79a139839f726287bb12566d789fa0f4d5b1994cd88518a2e2eb"
	I1121 23:51:55.539762  526077 cri.go:89] found id: "d6df5ea7f4eb5e6b4852fd9cf791dda6c35753e420985175cbcc2a80b368d82b"
	I1121 23:51:55.539766  526077 cri.go:89] found id: "800239fcbfd600dbcec2ac03099f8200b62cf3769357e03ab2d40f672490913e"
	I1121 23:51:55.539769  526077 cri.go:89] found id: "1d9bfd16346a7b544d9696f5ac0700133b9c44dfb05b597e6ece14fdf7c1ee4d"
	I1121 23:51:55.539772  526077 cri.go:89] found id: "b7eb954adbbab5deddd57f625c7dce81a5fbc6e9ee1d2cb260d88e1fbd1482da"
	I1121 23:51:55.539775  526077 cri.go:89] found id: "ba42295e49f9af2999894c8bde53ee31c193600b80cc921d12a7b280aefbca13"
	I1121 23:51:55.539778  526077 cri.go:89] found id: "36f901d7268653e5e64e73d2f8c787b658cab9899bc17b2cc522fb984b5ae3f7"
	I1121 23:51:55.539786  526077 cri.go:89] found id: "561d110537c5cbfb43c832086d9c8216a7180387df20a1b9c68b29a4b682f207"
	I1121 23:51:55.539810  526077 cri.go:89] found id: "e6dc31e0930681fac6fbd625f4ec7a07e57c10d13a728a7ec163a4c66a6d4a2b"
	I1121 23:51:55.539815  526077 cri.go:89] found id: "074654b9d6b9f820e2f61d8ef839ef5ebec8673802a3e034c02530f243f023d0"
	I1121 23:51:55.539819  526077 cri.go:89] found id: "970af788676bddee24edc4dbf7882805510ac451d6658537c9d2152752c3ffee"
	I1121 23:51:55.539822  526077 cri.go:89] found id: "f6c2269669bcf9942b43c38a6d80d882a37c12fba06a2b1b514b07dbd6183350"
	I1121 23:51:55.539827  526077 cri.go:89] found id: "415d7ebb38dbfa2b139e79fcf924802ecf12d3ba74e075e30026fdd18353d343"
	I1121 23:51:55.539834  526077 cri.go:89] found id: "ba805611fb053ae520be5762cfc188a2dd5915f488bed9770376cb5e14b60936"
	I1121 23:51:55.539837  526077 cri.go:89] found id: "0b539dfc17788b7400ee2eb5abadb008e5a8a9796bb112d8a69f14a34f2fd551"
	I1121 23:51:55.539840  526077 cri.go:89] found id: ""
	I1121 23:51:55.539890  526077 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 23:51:55.554528  526077 out.go:203] 
	W1121 23:51:55.557352  526077 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:51:55Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:51:55Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 23:51:55.557372  526077 out.go:285] * 
	* 
	W1121 23:51:55.564236  526077 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 23:51:55.567908  526077 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-882841 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-882841 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-882841 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (283.042595ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1121 23:51:55.643936  526123 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:51:55.644500  526123 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:51:55.644515  526123 out.go:374] Setting ErrFile to fd 2...
	I1121 23:51:55.644521  526123 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:51:55.644779  526123 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1121 23:51:55.645074  526123 mustload.go:66] Loading cluster: addons-882841
	I1121 23:51:55.645452  526123 config.go:182] Loaded profile config "addons-882841": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:51:55.645471  526123 addons.go:622] checking whether the cluster is paused
	I1121 23:51:55.645578  526123 config.go:182] Loaded profile config "addons-882841": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:51:55.645594  526123 host.go:66] Checking if "addons-882841" exists ...
	I1121 23:51:55.646173  526123 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:51:55.669361  526123 ssh_runner.go:195] Run: systemctl --version
	I1121 23:51:55.669412  526123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:51:55.690444  526123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	I1121 23:51:55.792661  526123 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 23:51:55.792750  526123 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 23:51:55.823315  526123 cri.go:89] found id: "d4288bfcc52cab787e4c57cd6f6ce8b5e4eab8e0f753f7b4b8c0dfbb6d7fcacf"
	I1121 23:51:55.823337  526123 cri.go:89] found id: "c2b953c3eb94bb9506b88f8f9082db05bab7bbad3c3c92fb83a61fd4148bcd7c"
	I1121 23:51:55.823347  526123 cri.go:89] found id: "4b84c2719358c69a49238d48c9737dea6a7f98672fde68733fbdfd0c5f76c519"
	I1121 23:51:55.823351  526123 cri.go:89] found id: "03e07c8bd5633a56d64051d6a63832c1a6fc109d661151083091d50ee1a7dfb7"
	I1121 23:51:55.823354  526123 cri.go:89] found id: "4f400d29139bbf4aeaa41d46055af4710e38ae5d6a844d3ee8b87ce4de3a0f3a"
	I1121 23:51:55.823358  526123 cri.go:89] found id: "0ecc1bcf505044ab1e7b917a3904e9d8ff652e08129c43483e6ff6f465bc7f48"
	I1121 23:51:55.823361  526123 cri.go:89] found id: "0d6392a88c56afc228d16b7659bcfa96628c1585bdb4b03af537ff609bf9f34a"
	I1121 23:51:55.823364  526123 cri.go:89] found id: "492d7c5835fb8502b60633915d9d3f885aa7bc3696e4febec5b394bab0a6773b"
	I1121 23:51:55.823367  526123 cri.go:89] found id: "96794062627c79a139839f726287bb12566d789fa0f4d5b1994cd88518a2e2eb"
	I1121 23:51:55.823374  526123 cri.go:89] found id: "d6df5ea7f4eb5e6b4852fd9cf791dda6c35753e420985175cbcc2a80b368d82b"
	I1121 23:51:55.823377  526123 cri.go:89] found id: "800239fcbfd600dbcec2ac03099f8200b62cf3769357e03ab2d40f672490913e"
	I1121 23:51:55.823380  526123 cri.go:89] found id: "1d9bfd16346a7b544d9696f5ac0700133b9c44dfb05b597e6ece14fdf7c1ee4d"
	I1121 23:51:55.823383  526123 cri.go:89] found id: "b7eb954adbbab5deddd57f625c7dce81a5fbc6e9ee1d2cb260d88e1fbd1482da"
	I1121 23:51:55.823386  526123 cri.go:89] found id: "ba42295e49f9af2999894c8bde53ee31c193600b80cc921d12a7b280aefbca13"
	I1121 23:51:55.823389  526123 cri.go:89] found id: "36f901d7268653e5e64e73d2f8c787b658cab9899bc17b2cc522fb984b5ae3f7"
	I1121 23:51:55.823399  526123 cri.go:89] found id: "561d110537c5cbfb43c832086d9c8216a7180387df20a1b9c68b29a4b682f207"
	I1121 23:51:55.823407  526123 cri.go:89] found id: "e6dc31e0930681fac6fbd625f4ec7a07e57c10d13a728a7ec163a4c66a6d4a2b"
	I1121 23:51:55.823414  526123 cri.go:89] found id: "074654b9d6b9f820e2f61d8ef839ef5ebec8673802a3e034c02530f243f023d0"
	I1121 23:51:55.823417  526123 cri.go:89] found id: "970af788676bddee24edc4dbf7882805510ac451d6658537c9d2152752c3ffee"
	I1121 23:51:55.823420  526123 cri.go:89] found id: "f6c2269669bcf9942b43c38a6d80d882a37c12fba06a2b1b514b07dbd6183350"
	I1121 23:51:55.823425  526123 cri.go:89] found id: "415d7ebb38dbfa2b139e79fcf924802ecf12d3ba74e075e30026fdd18353d343"
	I1121 23:51:55.823432  526123 cri.go:89] found id: "ba805611fb053ae520be5762cfc188a2dd5915f488bed9770376cb5e14b60936"
	I1121 23:51:55.823435  526123 cri.go:89] found id: "0b539dfc17788b7400ee2eb5abadb008e5a8a9796bb112d8a69f14a34f2fd551"
	I1121 23:51:55.823438  526123 cri.go:89] found id: ""
	I1121 23:51:55.823491  526123 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 23:51:55.838574  526123 out.go:203] 
	W1121 23:51:55.841563  526123 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:51:55Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:51:55Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 23:51:55.841585  526123 out.go:285] * 
	* 
	W1121 23:51:55.848334  526123 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 23:51:55.851279  526123 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-882841 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (43.51s)
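Note: every CSI data-path step passed (PVC bound, pod ran, snapshot taken, restore pod ran, cleanup kubectl calls succeeded); only the trailing volumesnapshots and csi-hostpath-driver disable calls fail on the paused check. If the disable never runs, leftover snapshot objects can be inspected by hand; a sketch, assuming the external-snapshotter CRDs installed by the addon:

    kubectl --context addons-882841 get volumesnapshot,volumesnapshotcontent -A
    kubectl --context addons-882841 get pvc -n default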

                                                
                                    
x
+
TestAddons/parallel/Headlamp (3.38s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-882841 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-882841 --alsologtostderr -v=1: exit status 11 (395.787687ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1121 23:51:12.163470  524468 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:51:12.164442  524468 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:51:12.164484  524468 out.go:374] Setting ErrFile to fd 2...
	I1121 23:51:12.164503  524468 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:51:12.165138  524468 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1121 23:51:12.165550  524468 mustload.go:66] Loading cluster: addons-882841
	I1121 23:51:12.166022  524468 config.go:182] Loaded profile config "addons-882841": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:51:12.166063  524468 addons.go:622] checking whether the cluster is paused
	I1121 23:51:12.166237  524468 config.go:182] Loaded profile config "addons-882841": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:51:12.166269  524468 host.go:66] Checking if "addons-882841" exists ...
	I1121 23:51:12.166851  524468 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:51:12.190587  524468 ssh_runner.go:195] Run: systemctl --version
	I1121 23:51:12.190652  524468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:51:12.215354  524468 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	I1121 23:51:12.330017  524468 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 23:51:12.330102  524468 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 23:51:12.377662  524468 cri.go:89] found id: "d4288bfcc52cab787e4c57cd6f6ce8b5e4eab8e0f753f7b4b8c0dfbb6d7fcacf"
	I1121 23:51:12.377726  524468 cri.go:89] found id: "c2b953c3eb94bb9506b88f8f9082db05bab7bbad3c3c92fb83a61fd4148bcd7c"
	I1121 23:51:12.377752  524468 cri.go:89] found id: "4b84c2719358c69a49238d48c9737dea6a7f98672fde68733fbdfd0c5f76c519"
	I1121 23:51:12.377770  524468 cri.go:89] found id: "03e07c8bd5633a56d64051d6a63832c1a6fc109d661151083091d50ee1a7dfb7"
	I1121 23:51:12.377789  524468 cri.go:89] found id: "4f400d29139bbf4aeaa41d46055af4710e38ae5d6a844d3ee8b87ce4de3a0f3a"
	I1121 23:51:12.377844  524468 cri.go:89] found id: "0ecc1bcf505044ab1e7b917a3904e9d8ff652e08129c43483e6ff6f465bc7f48"
	I1121 23:51:12.377864  524468 cri.go:89] found id: "0d6392a88c56afc228d16b7659bcfa96628c1585bdb4b03af537ff609bf9f34a"
	I1121 23:51:12.377882  524468 cri.go:89] found id: "492d7c5835fb8502b60633915d9d3f885aa7bc3696e4febec5b394bab0a6773b"
	I1121 23:51:12.377901  524468 cri.go:89] found id: "96794062627c79a139839f726287bb12566d789fa0f4d5b1994cd88518a2e2eb"
	I1121 23:51:12.377929  524468 cri.go:89] found id: "d6df5ea7f4eb5e6b4852fd9cf791dda6c35753e420985175cbcc2a80b368d82b"
	I1121 23:51:12.377948  524468 cri.go:89] found id: "800239fcbfd600dbcec2ac03099f8200b62cf3769357e03ab2d40f672490913e"
	I1121 23:51:12.377965  524468 cri.go:89] found id: "1d9bfd16346a7b544d9696f5ac0700133b9c44dfb05b597e6ece14fdf7c1ee4d"
	I1121 23:51:12.377982  524468 cri.go:89] found id: "b7eb954adbbab5deddd57f625c7dce81a5fbc6e9ee1d2cb260d88e1fbd1482da"
	I1121 23:51:12.378007  524468 cri.go:89] found id: "ba42295e49f9af2999894c8bde53ee31c193600b80cc921d12a7b280aefbca13"
	I1121 23:51:12.378025  524468 cri.go:89] found id: "36f901d7268653e5e64e73d2f8c787b658cab9899bc17b2cc522fb984b5ae3f7"
	I1121 23:51:12.378045  524468 cri.go:89] found id: "561d110537c5cbfb43c832086d9c8216a7180387df20a1b9c68b29a4b682f207"
	I1121 23:51:12.378072  524468 cri.go:89] found id: "e6dc31e0930681fac6fbd625f4ec7a07e57c10d13a728a7ec163a4c66a6d4a2b"
	I1121 23:51:12.378090  524468 cri.go:89] found id: "074654b9d6b9f820e2f61d8ef839ef5ebec8673802a3e034c02530f243f023d0"
	I1121 23:51:12.378106  524468 cri.go:89] found id: "970af788676bddee24edc4dbf7882805510ac451d6658537c9d2152752c3ffee"
	I1121 23:51:12.378124  524468 cri.go:89] found id: "f6c2269669bcf9942b43c38a6d80d882a37c12fba06a2b1b514b07dbd6183350"
	I1121 23:51:12.378145  524468 cri.go:89] found id: "415d7ebb38dbfa2b139e79fcf924802ecf12d3ba74e075e30026fdd18353d343"
	I1121 23:51:12.378171  524468 cri.go:89] found id: "ba805611fb053ae520be5762cfc188a2dd5915f488bed9770376cb5e14b60936"
	I1121 23:51:12.378273  524468 cri.go:89] found id: "0b539dfc17788b7400ee2eb5abadb008e5a8a9796bb112d8a69f14a34f2fd551"
	I1121 23:51:12.378292  524468 cri.go:89] found id: ""
	I1121 23:51:12.378378  524468 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 23:51:12.394907  524468 out.go:203] 
	W1121 23:51:12.397978  524468 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:51:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:51:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 23:51:12.398043  524468 out.go:285] * 
	* 
	W1121 23:51:12.410726  524468 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 23:51:12.417862  524468 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-882841 --alsologtostderr -v=1": exit status 11
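Editor's note on the failure above: before enabling an addon, the addons command checks whether the cluster's containers are paused, and that check runs `sudo runc list -f json` on the node. Here the check itself fails because runc's state directory `/run/runc` does not exist on the node, so the command aborts with MK_ADDON_ENABLE_PAUSED even though nothing is actually paused. The Go sketch below shows roughly how that probe can be reproduced outside minikube; it is illustrative only, not minikube's implementation, and the decision to treat a missing state directory as "no containers" is an assumption of the sketch.

// Illustrative Go sketch (not minikube's code): probe for paused runc
// containers the way the failed check above does, treating a missing
// /run/runc state directory as "no containers yet" (an assumption made
// for this sketch).
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// runcContainer models only the fields of interest from `runc list -f json`.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func listPaused() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		// This is the failure mode in the log above: runc cannot open its state dir.
		if strings.Contains(string(out), "no such file or directory") {
			return nil, nil
		}
		return nil, fmt.Errorf("runc list: %v: %s", err, out)
	}
	var containers []runcContainer
	if err := json.Unmarshal(out, &containers); err != nil {
		return nil, err
	}
	var paused []string
	for _, c := range containers {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() {
	ids, err := listPaused()
	fmt.Println("paused containers:", ids, "err:", err)
}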
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-882841
helpers_test.go:243: (dbg) docker inspect addons-882841:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "cbf01a114cc53a8b6c72a0ed56d9776d5ffd3dfdacd5a45cb3e08babfb8e2033",
	        "Created": "2025-11-21T23:48:11.665008112Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 518101,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T23:48:11.703162071Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:97061622515a85470dad13cb046ad5fc5021fcd2b05aa921a620abb52951cd5d",
	        "ResolvConfPath": "/var/lib/docker/containers/cbf01a114cc53a8b6c72a0ed56d9776d5ffd3dfdacd5a45cb3e08babfb8e2033/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cbf01a114cc53a8b6c72a0ed56d9776d5ffd3dfdacd5a45cb3e08babfb8e2033/hostname",
	        "HostsPath": "/var/lib/docker/containers/cbf01a114cc53a8b6c72a0ed56d9776d5ffd3dfdacd5a45cb3e08babfb8e2033/hosts",
	        "LogPath": "/var/lib/docker/containers/cbf01a114cc53a8b6c72a0ed56d9776d5ffd3dfdacd5a45cb3e08babfb8e2033/cbf01a114cc53a8b6c72a0ed56d9776d5ffd3dfdacd5a45cb3e08babfb8e2033-json.log",
	        "Name": "/addons-882841",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-882841:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-882841",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "cbf01a114cc53a8b6c72a0ed56d9776d5ffd3dfdacd5a45cb3e08babfb8e2033",
	                "LowerDir": "/var/lib/docker/overlay2/6c3988b3528b3a3bf63b623a08f0a43fa28c9bfbdf23b4a999ec7d70676a8e42-init/diff:/var/lib/docker/overlay2/7e8788c6de692bc1c3758a2bb2c4b8da0fbba26855f855c0f3b655bfbdd92f8e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6c3988b3528b3a3bf63b623a08f0a43fa28c9bfbdf23b4a999ec7d70676a8e42/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6c3988b3528b3a3bf63b623a08f0a43fa28c9bfbdf23b4a999ec7d70676a8e42/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6c3988b3528b3a3bf63b623a08f0a43fa28c9bfbdf23b4a999ec7d70676a8e42/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-882841",
	                "Source": "/var/lib/docker/volumes/addons-882841/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-882841",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-882841",
	                "name.minikube.sigs.k8s.io": "addons-882841",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5cf314fe3031b36014fe97d0f307d0af8308642c7f1c4dbb4b3be2895bcb12b4",
	            "SandboxKey": "/var/run/docker/netns/5cf314fe3031",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33495"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33496"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33499"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33497"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33498"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-882841": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "26:43:f7:c6:39:89",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "129d8f735ebe44960774442ba542960f928613e67c001d7be8766fc635e8e2ec",
	                    "EndpointID": "529b816e88f8f65b7e9d124edf03f4e2170d844d4d1cb2d5af6003ccf3f08c45",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-882841",
	                        "cbf01a114cc5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
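Editor's note: the provisioning log further below reads the forwarded SSH port out of exactly this `docker inspect` output, via the template `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}` (127.0.0.1:33495 in this run). A minimal, self-contained Go sketch of the same lookup follows; it assumes the docker CLI is on PATH and the profile name addons-882841 shown above, and the struct models only the fields needed here.

// Illustrative Go sketch: pull the host port mapped to the container's
// 22/tcp out of `docker inspect`, i.e. the 127.0.0.1:33495 binding visible
// in the NetworkSettings.Ports section above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type dockerInspect struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string `json:"HostIp"`
			HostPort string `json:"HostPort"`
		} `json:"Ports"`
	} `json:"NetworkSettings"`
}

func main() {
	out, err := exec.Command("docker", "inspect", "addons-882841").Output()
	if err != nil {
		panic(err)
	}
	var containers []dockerInspect
	if err := json.Unmarshal(out, &containers); err != nil {
		panic(err)
	}
	if len(containers) == 0 {
		panic("no such container")
	}
	if bindings := containers[0].NetworkSettings.Ports["22/tcp"]; len(bindings) > 0 {
		// Equivalent to the template used by the cli_runner calls later in the log:
		// {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}
		fmt.Printf("ssh endpoint: %s:%s\n", bindings[0].HostIp, bindings[0].HostPort)
	}
}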
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-882841 -n addons-882841
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-882841 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-882841 logs -n 25: (1.544513982s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-255607 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-255607   │ jenkins │ v1.37.0 │ 21 Nov 25 23:47 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 21 Nov 25 23:47 UTC │ 21 Nov 25 23:47 UTC │
	│ delete  │ -p download-only-255607                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-255607   │ jenkins │ v1.37.0 │ 21 Nov 25 23:47 UTC │ 21 Nov 25 23:47 UTC │
	│ start   │ -o=json --download-only -p download-only-454799 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-454799   │ jenkins │ v1.37.0 │ 21 Nov 25 23:47 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 21 Nov 25 23:47 UTC │ 21 Nov 25 23:47 UTC │
	│ delete  │ -p download-only-454799                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-454799   │ jenkins │ v1.37.0 │ 21 Nov 25 23:47 UTC │ 21 Nov 25 23:47 UTC │
	│ delete  │ -p download-only-255607                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-255607   │ jenkins │ v1.37.0 │ 21 Nov 25 23:47 UTC │ 21 Nov 25 23:47 UTC │
	│ delete  │ -p download-only-454799                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-454799   │ jenkins │ v1.37.0 │ 21 Nov 25 23:47 UTC │ 21 Nov 25 23:47 UTC │
	│ start   │ --download-only -p download-docker-291874 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-291874 │ jenkins │ v1.37.0 │ 21 Nov 25 23:47 UTC │                     │
	│ delete  │ -p download-docker-291874                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-291874 │ jenkins │ v1.37.0 │ 21 Nov 25 23:47 UTC │ 21 Nov 25 23:47 UTC │
	│ start   │ --download-only -p binary-mirror-343381 --alsologtostderr --binary-mirror http://127.0.0.1:44455 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-343381   │ jenkins │ v1.37.0 │ 21 Nov 25 23:47 UTC │                     │
	│ delete  │ -p binary-mirror-343381                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-343381   │ jenkins │ v1.37.0 │ 21 Nov 25 23:47 UTC │ 21 Nov 25 23:47 UTC │
	│ addons  │ enable dashboard -p addons-882841                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-882841          │ jenkins │ v1.37.0 │ 21 Nov 25 23:47 UTC │                     │
	│ addons  │ disable dashboard -p addons-882841                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-882841          │ jenkins │ v1.37.0 │ 21 Nov 25 23:47 UTC │                     │
	│ start   │ -p addons-882841 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-882841          │ jenkins │ v1.37.0 │ 21 Nov 25 23:47 UTC │ 21 Nov 25 23:50 UTC │
	│ addons  │ addons-882841 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-882841          │ jenkins │ v1.37.0 │ 21 Nov 25 23:50 UTC │                     │
	│ addons  │ addons-882841 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-882841          │ jenkins │ v1.37.0 │ 21 Nov 25 23:50 UTC │                     │
	│ addons  │ addons-882841 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-882841          │ jenkins │ v1.37.0 │ 21 Nov 25 23:50 UTC │                     │
	│ addons  │ addons-882841 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-882841          │ jenkins │ v1.37.0 │ 21 Nov 25 23:51 UTC │                     │
	│ ip      │ addons-882841 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-882841          │ jenkins │ v1.37.0 │ 21 Nov 25 23:51 UTC │ 21 Nov 25 23:51 UTC │
	│ addons  │ addons-882841 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-882841          │ jenkins │ v1.37.0 │ 21 Nov 25 23:51 UTC │                     │
	│ ssh     │ addons-882841 ssh cat /opt/local-path-provisioner/pvc-f4b21e23-9cc1-4889-8265-d98a837f9eda_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-882841          │ jenkins │ v1.37.0 │ 21 Nov 25 23:51 UTC │ 21 Nov 25 23:51 UTC │
	│ addons  │ addons-882841 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-882841          │ jenkins │ v1.37.0 │ 21 Nov 25 23:51 UTC │                     │
	│ addons  │ addons-882841 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-882841          │ jenkins │ v1.37.0 │ 21 Nov 25 23:51 UTC │                     │
	│ addons  │ enable headlamp -p addons-882841 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-882841          │ jenkins │ v1.37.0 │ 21 Nov 25 23:51 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 23:47:47
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 23:47:47.136572  517697 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:47:47.136742  517697 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:47:47.136771  517697 out.go:374] Setting ErrFile to fd 2...
	I1121 23:47:47.136792  517697 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:47:47.137151  517697 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1121 23:47:47.137694  517697 out.go:368] Setting JSON to false
	I1121 23:47:47.139012  517697 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":16184,"bootTime":1763752684,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1121 23:47:47.139091  517697 start.go:143] virtualization:  
	I1121 23:47:47.142241  517697 out.go:179] * [addons-882841] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1121 23:47:47.146046  517697 out.go:179]   - MINIKUBE_LOCATION=21934
	I1121 23:47:47.146123  517697 notify.go:221] Checking for updates...
	I1121 23:47:47.151861  517697 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 23:47:47.154703  517697 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1121 23:47:47.157409  517697 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-513600/.minikube
	I1121 23:47:47.160274  517697 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1121 23:47:47.163198  517697 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 23:47:47.166192  517697 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 23:47:47.186586  517697 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1121 23:47:47.186710  517697 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 23:47:47.254013  517697 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-11-21 23:47:47.237918784 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 23:47:47.254120  517697 docker.go:319] overlay module found
	I1121 23:47:47.257244  517697 out.go:179] * Using the docker driver based on user configuration
	I1121 23:47:47.260086  517697 start.go:309] selected driver: docker
	I1121 23:47:47.260103  517697 start.go:930] validating driver "docker" against <nil>
	I1121 23:47:47.260116  517697 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 23:47:47.260836  517697 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 23:47:47.313203  517697 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-11-21 23:47:47.303843437 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 23:47:47.313367  517697 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1121 23:47:47.313592  517697 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 23:47:47.316566  517697 out.go:179] * Using Docker driver with root privileges
	I1121 23:47:47.319435  517697 cni.go:84] Creating CNI manager for ""
	I1121 23:47:47.319505  517697 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 23:47:47.319518  517697 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1121 23:47:47.319597  517697 start.go:353] cluster config:
	{Name:addons-882841 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-882841 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1121 23:47:47.322725  517697 out.go:179] * Starting "addons-882841" primary control-plane node in "addons-882841" cluster
	I1121 23:47:47.325532  517697 cache.go:134] Beginning downloading kic base image for docker with crio
	I1121 23:47:47.328474  517697 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1121 23:47:47.331328  517697 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 23:47:47.331375  517697 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1121 23:47:47.331389  517697 cache.go:65] Caching tarball of preloaded images
	I1121 23:47:47.331397  517697 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1121 23:47:47.331480  517697 preload.go:238] Found /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1121 23:47:47.331506  517697 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1121 23:47:47.331938  517697 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/config.json ...
	I1121 23:47:47.331961  517697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/config.json: {Name:mk942f8f2ad4834012eb7442332ef1f177632391 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:47:47.347262  517697 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e to local cache
	I1121 23:47:47.347417  517697 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local cache directory
	I1121 23:47:47.347438  517697 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local cache directory, skipping pull
	I1121 23:47:47.347442  517697 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in cache, skipping pull
	I1121 23:47:47.347449  517697 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e as a tarball
	I1121 23:47:47.347454  517697 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e from local cache
	I1121 23:48:05.149534  517697 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e from cached tarball
	I1121 23:48:05.149571  517697 cache.go:243] Successfully downloaded all kic artifacts
	I1121 23:48:05.149605  517697 start.go:360] acquireMachinesLock for addons-882841: {Name:mk32b69fee55935d27dd144fc65beab88981c1d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 23:48:05.149734  517697 start.go:364] duration metric: took 105.876µs to acquireMachinesLock for "addons-882841"
	I1121 23:48:05.149776  517697 start.go:93] Provisioning new machine with config: &{Name:addons-882841 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-882841 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 23:48:05.149875  517697 start.go:125] createHost starting for "" (driver="docker")
	I1121 23:48:05.151566  517697 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1121 23:48:05.151811  517697 start.go:159] libmachine.API.Create for "addons-882841" (driver="docker")
	I1121 23:48:05.151847  517697 client.go:173] LocalClient.Create starting
	I1121 23:48:05.151964  517697 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem
	I1121 23:48:05.331380  517697 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem
	I1121 23:48:05.458382  517697 cli_runner.go:164] Run: docker network inspect addons-882841 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1121 23:48:05.473231  517697 cli_runner.go:211] docker network inspect addons-882841 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1121 23:48:05.473326  517697 network_create.go:284] running [docker network inspect addons-882841] to gather additional debugging logs...
	I1121 23:48:05.473345  517697 cli_runner.go:164] Run: docker network inspect addons-882841
	W1121 23:48:05.494850  517697 cli_runner.go:211] docker network inspect addons-882841 returned with exit code 1
	I1121 23:48:05.494879  517697 network_create.go:287] error running [docker network inspect addons-882841]: docker network inspect addons-882841: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-882841 not found
	I1121 23:48:05.494893  517697 network_create.go:289] output of [docker network inspect addons-882841]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-882841 not found
	
	** /stderr **
	I1121 23:48:05.494993  517697 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 23:48:05.511973  517697 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019cc4e0}
	I1121 23:48:05.512012  517697 network_create.go:124] attempt to create docker network addons-882841 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1121 23:48:05.512064  517697 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-882841 addons-882841
	I1121 23:48:05.564350  517697 network_create.go:108] docker network addons-882841 192.168.49.0/24 created
	I1121 23:48:05.564385  517697 kic.go:121] calculated static IP "192.168.49.2" for the "addons-882841" container
	I1121 23:48:05.564464  517697 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1121 23:48:05.579858  517697 cli_runner.go:164] Run: docker volume create addons-882841 --label name.minikube.sigs.k8s.io=addons-882841 --label created_by.minikube.sigs.k8s.io=true
	I1121 23:48:05.597413  517697 oci.go:103] Successfully created a docker volume addons-882841
	I1121 23:48:05.597509  517697 cli_runner.go:164] Run: docker run --rm --name addons-882841-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-882841 --entrypoint /usr/bin/test -v addons-882841:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -d /var/lib
	I1121 23:48:07.230877  517697 cli_runner.go:217] Completed: docker run --rm --name addons-882841-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-882841 --entrypoint /usr/bin/test -v addons-882841:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -d /var/lib: (1.633328928s)
	I1121 23:48:07.230916  517697 oci.go:107] Successfully prepared a docker volume addons-882841
	I1121 23:48:07.230969  517697 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 23:48:07.230982  517697 kic.go:194] Starting extracting preloaded images to volume ...
	I1121 23:48:07.231044  517697 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-882841:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir
	I1121 23:48:11.595058  517697 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-882841:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir: (4.363977578s)
	I1121 23:48:11.595090  517697 kic.go:203] duration metric: took 4.364104442s to extract preloaded images to volume ...
	W1121 23:48:11.595235  517697 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1121 23:48:11.595344  517697 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1121 23:48:11.650987  517697 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-882841 --name addons-882841 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-882841 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-882841 --network addons-882841 --ip 192.168.49.2 --volume addons-882841:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e
	I1121 23:48:11.905285  517697 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Running}}
	I1121 23:48:11.934176  517697 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:48:11.955140  517697 cli_runner.go:164] Run: docker exec addons-882841 stat /var/lib/dpkg/alternatives/iptables
	I1121 23:48:12.005438  517697 oci.go:144] the created container "addons-882841" has a running status.
	I1121 23:48:12.005473  517697 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa...
	I1121 23:48:12.410732  517697 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1121 23:48:12.435339  517697 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:48:12.459635  517697 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1121 23:48:12.459660  517697 kic_runner.go:114] Args: [docker exec --privileged addons-882841 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1121 23:48:12.517192  517697 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:48:12.546138  517697 machine.go:94] provisionDockerMachine start ...
	I1121 23:48:12.546236  517697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:48:12.581125  517697 main.go:143] libmachine: Using SSH client type: native
	I1121 23:48:12.581442  517697 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33495 <nil> <nil>}
	I1121 23:48:12.581452  517697 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 23:48:12.582477  517697 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45812->127.0.0.1:33495: read: connection reset by peer
	I1121 23:48:15.721412  517697 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-882841
	
	I1121 23:48:15.721439  517697 ubuntu.go:182] provisioning hostname "addons-882841"
	I1121 23:48:15.721502  517697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:48:15.738594  517697 main.go:143] libmachine: Using SSH client type: native
	I1121 23:48:15.738917  517697 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33495 <nil> <nil>}
	I1121 23:48:15.738936  517697 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-882841 && echo "addons-882841" | sudo tee /etc/hostname
	I1121 23:48:15.890656  517697 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-882841
	
	I1121 23:48:15.890752  517697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:48:15.907558  517697 main.go:143] libmachine: Using SSH client type: native
	I1121 23:48:15.907885  517697 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33495 <nil> <nil>}
	I1121 23:48:15.907913  517697 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-882841' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-882841/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-882841' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 23:48:16.050194  517697 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1121 23:48:16.050286  517697 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-513600/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-513600/.minikube}
	I1121 23:48:16.050350  517697 ubuntu.go:190] setting up certificates
	I1121 23:48:16.050381  517697 provision.go:84] configureAuth start
	I1121 23:48:16.050473  517697 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-882841
	I1121 23:48:16.067951  517697 provision.go:143] copyHostCerts
	I1121 23:48:16.068036  517697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem (1078 bytes)
	I1121 23:48:16.068202  517697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem (1123 bytes)
	I1121 23:48:16.068260  517697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem (1675 bytes)
	I1121 23:48:16.068312  517697 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem org=jenkins.addons-882841 san=[127.0.0.1 192.168.49.2 addons-882841 localhost minikube]
	I1121 23:48:16.302165  517697 provision.go:177] copyRemoteCerts
	I1121 23:48:16.302232  517697 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 23:48:16.302280  517697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:48:16.318408  517697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	I1121 23:48:16.417159  517697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1121 23:48:16.433274  517697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1121 23:48:16.449582  517697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1121 23:48:16.466913  517697 provision.go:87] duration metric: took 416.496161ms to configureAuth
	I1121 23:48:16.466943  517697 ubuntu.go:206] setting minikube options for container-runtime
	I1121 23:48:16.467121  517697 config.go:182] Loaded profile config "addons-882841": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:48:16.467231  517697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:48:16.483442  517697 main.go:143] libmachine: Using SSH client type: native
	I1121 23:48:16.483801  517697 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33495 <nil> <nil>}
	I1121 23:48:16.483820  517697 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1121 23:48:16.752746  517697 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1121 23:48:16.752766  517697 machine.go:97] duration metric: took 4.206600004s to provisionDockerMachine
	I1121 23:48:16.752777  517697 client.go:176] duration metric: took 11.600920915s to LocalClient.Create
	I1121 23:48:16.752790  517697 start.go:167] duration metric: took 11.60097999s to libmachine.API.Create "addons-882841"
	I1121 23:48:16.752798  517697 start.go:293] postStartSetup for "addons-882841" (driver="docker")
	I1121 23:48:16.752807  517697 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 23:48:16.752875  517697 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 23:48:16.752934  517697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:48:16.769771  517697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	I1121 23:48:16.869846  517697 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 23:48:16.872921  517697 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 23:48:16.872950  517697 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 23:48:16.872962  517697 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/addons for local assets ...
	I1121 23:48:16.873024  517697 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/files for local assets ...
	I1121 23:48:16.873052  517697 start.go:296] duration metric: took 120.248926ms for postStartSetup
	I1121 23:48:16.873361  517697 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-882841
	I1121 23:48:16.889496  517697 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/config.json ...
	I1121 23:48:16.889779  517697 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 23:48:16.889960  517697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:48:16.906306  517697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	I1121 23:48:17.004031  517697 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 23:48:17.009063  517697 start.go:128] duration metric: took 11.859173381s to createHost
	I1121 23:48:17.009089  517697 start.go:83] releasing machines lock for "addons-882841", held for 11.859342714s
	I1121 23:48:17.009190  517697 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-882841
	I1121 23:48:17.028832  517697 ssh_runner.go:195] Run: cat /version.json
	I1121 23:48:17.028889  517697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:48:17.029147  517697 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 23:48:17.029208  517697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:48:17.048237  517697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	I1121 23:48:17.052561  517697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	I1121 23:48:17.230903  517697 ssh_runner.go:195] Run: systemctl --version
	I1121 23:48:17.237108  517697 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1121 23:48:17.271256  517697 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 23:48:17.275463  517697 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 23:48:17.275532  517697 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 23:48:17.298265  517697 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1121 23:48:17.298285  517697 start.go:496] detecting cgroup driver to use...
	I1121 23:48:17.298315  517697 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1121 23:48:17.298364  517697 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1121 23:48:17.314136  517697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1121 23:48:17.326555  517697 docker.go:218] disabling cri-docker service (if available) ...
	I1121 23:48:17.326667  517697 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 23:48:17.343626  517697 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 23:48:17.361525  517697 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 23:48:17.482750  517697 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 23:48:17.607661  517697 docker.go:234] disabling docker service ...
	I1121 23:48:17.607779  517697 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 23:48:17.627288  517697 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 23:48:17.640236  517697 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 23:48:17.765937  517697 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 23:48:17.880912  517697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 23:48:17.893686  517697 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 23:48:17.907952  517697 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1121 23:48:17.908026  517697 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 23:48:17.916599  517697 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1121 23:48:17.916665  517697 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 23:48:17.925668  517697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 23:48:17.935119  517697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 23:48:17.943902  517697 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 23:48:17.951889  517697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 23:48:17.960462  517697 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 23:48:17.973487  517697 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 23:48:17.982021  517697 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 23:48:17.989407  517697 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 23:48:17.996629  517697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 23:48:18.115499  517697 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1121 23:48:18.279685  517697 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1121 23:48:18.279789  517697 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1121 23:48:18.283660  517697 start.go:564] Will wait 60s for crictl version
	I1121 23:48:18.283729  517697 ssh_runner.go:195] Run: which crictl
	I1121 23:48:18.287221  517697 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 23:48:18.314920  517697 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
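After rewriting /etc/crio/crio.conf.d/02-crio.conf and restarting the service, the log waits up to 60s for /var/run/crio/crio.sock and then for a crictl version response. A minimal Go sketch of that style of polling wait (socket path and timeout copied from the log; the helper name is made up):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for the socket file until it appears or the deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket exists; the runtime came back after the restart
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}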
	I1121 23:48:18.315041  517697 ssh_runner.go:195] Run: crio --version
	I1121 23:48:18.345078  517697 ssh_runner.go:195] Run: crio --version
	I1121 23:48:18.373136  517697 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1121 23:48:18.374363  517697 cli_runner.go:164] Run: docker network inspect addons-882841 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 23:48:18.389968  517697 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1121 23:48:18.393792  517697 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
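The two commands above first check /etc/hosts for a host.minikube.internal entry and, if missing, rewrite the file so the gateway IP 192.168.49.1 maps to that name. A hypothetical in-memory Go equivalent of that rewrite (it deliberately does not touch the real /etc/hosts):

package main

import (
	"fmt"
	"strings"
)

// ensureHostEntry mirrors the shell pipeline above: keep every line that does
// not already end in "<tab>name", then append "ip<tab>name".
func ensureHostEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	sample := "127.0.0.1\tlocalhost\n192.168.49.2\taddons-882841\n"
	fmt.Print(ensureHostEntry(sample, "192.168.49.1", "host.minikube.internal"))
}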
	I1121 23:48:18.403095  517697 kubeadm.go:884] updating cluster {Name:addons-882841 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-882841 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 23:48:18.403222  517697 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 23:48:18.403275  517697 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 23:48:18.441897  517697 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 23:48:18.441922  517697 crio.go:433] Images already preloaded, skipping extraction
	I1121 23:48:18.441977  517697 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 23:48:18.466095  517697 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 23:48:18.466122  517697 cache_images.go:86] Images are preloaded, skipping loading
	I1121 23:48:18.466130  517697 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1121 23:48:18.466213  517697 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-882841 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-882841 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1121 23:48:18.466306  517697 ssh_runner.go:195] Run: crio config
	I1121 23:48:18.519668  517697 cni.go:84] Creating CNI manager for ""
	I1121 23:48:18.519692  517697 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 23:48:18.519719  517697 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 23:48:18.519744  517697 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-882841 NodeName:addons-882841 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 23:48:18.519870  517697 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-882841"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
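The rendered kubeadm config above pins podSubnet 10.244.0.0/16, serviceSubnet 10.96.0.0/12, and advertiseAddress 192.168.49.2. As an illustration only (not part of minikube or the test output), a few lines of Go with net/netip can confirm those ranges do not collide:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	pod := netip.MustParsePrefix("10.244.0.0/16") // podSubnet from the config above
	svc := netip.MustParsePrefix("10.96.0.0/12")  // serviceSubnet
	node := netip.MustParseAddr("192.168.49.2")   // advertiseAddress / node IP

	fmt.Println("pod/service overlap:", pod.Overlaps(svc))     // false
	fmt.Println("node inside podSubnet:", pod.Contains(node))  // false
	fmt.Println("node inside serviceSubnet:", svc.Contains(node)) // false
}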
	I1121 23:48:18.519943  517697 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 23:48:18.527814  517697 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 23:48:18.527917  517697 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 23:48:18.535677  517697 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1121 23:48:18.548725  517697 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 23:48:18.561762  517697 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1121 23:48:18.574841  517697 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1121 23:48:18.578495  517697 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1121 23:48:18.588813  517697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 23:48:18.710443  517697 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 23:48:18.727312  517697 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841 for IP: 192.168.49.2
	I1121 23:48:18.727333  517697 certs.go:195] generating shared ca certs ...
	I1121 23:48:18.727348  517697 certs.go:227] acquiring lock for ca certs: {Name:mkaf4c79493334cb2058b349c36be7473837f9b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:48:18.727472  517697 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key
	I1121 23:48:18.911581  517697 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt ...
	I1121 23:48:18.911614  517697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt: {Name:mk9aa55453fcf9a5a4c30ab97d8e3cf50d149db9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:48:18.911819  517697 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key ...
	I1121 23:48:18.911832  517697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key: {Name:mka98daf7e34c04048cf452042bef2d442adadb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:48:18.911919  517697 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key
	I1121 23:48:19.140262  517697 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.crt ...
	I1121 23:48:19.140292  517697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.crt: {Name:mkaf717a1819d0db70b6e4130ef58174f05fbada Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:48:19.140472  517697 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key ...
	I1121 23:48:19.140484  517697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key: {Name:mkd8ffc55dc4383da1bb533ba0063c89b86f7eda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:48:19.140594  517697 certs.go:257] generating profile certs ...
	I1121 23:48:19.140660  517697 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/client.key
	I1121 23:48:19.140678  517697 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/client.crt with IP's: []
	I1121 23:48:19.509316  517697 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/client.crt ...
	I1121 23:48:19.509358  517697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/client.crt: {Name:mk846505275ee80b58d909ce5fd9b6d3a3629ebb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:48:19.509541  517697 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/client.key ...
	I1121 23:48:19.509554  517697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/client.key: {Name:mk1565060f77005f003a53864b1e37ed589f4b5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:48:19.509634  517697 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/apiserver.key.696983bf
	I1121 23:48:19.509656  517697 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/apiserver.crt.696983bf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1121 23:48:19.801808  517697 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/apiserver.crt.696983bf ...
	I1121 23:48:19.801840  517697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/apiserver.crt.696983bf: {Name:mk50ea577b93205edaa13b5cdd71cddb9428b381 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:48:19.802021  517697 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/apiserver.key.696983bf ...
	I1121 23:48:19.802038  517697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/apiserver.key.696983bf: {Name:mk33f735dfbc5b7a4a68736b59562f6821940f4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:48:19.802117  517697 certs.go:382] copying /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/apiserver.crt.696983bf -> /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/apiserver.crt
	I1121 23:48:19.802197  517697 certs.go:386] copying /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/apiserver.key.696983bf -> /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/apiserver.key
	I1121 23:48:19.802248  517697 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/proxy-client.key
	I1121 23:48:19.802271  517697 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/proxy-client.crt with IP's: []
	I1121 23:48:19.967783  517697 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/proxy-client.crt ...
	I1121 23:48:19.967812  517697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/proxy-client.crt: {Name:mk2a9aab16ec6d745447f7af0a56129168b939be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:48:19.967980  517697 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/proxy-client.key ...
	I1121 23:48:19.967994  517697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/proxy-client.key: {Name:mk7e74e0c51b6b1ebff112ae6f72f2251877ef76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:48:19.968182  517697 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem (1679 bytes)
	I1121 23:48:19.968224  517697 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem (1078 bytes)
	I1121 23:48:19.968252  517697 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem (1123 bytes)
	I1121 23:48:19.968286  517697 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem (1675 bytes)
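The certs.go/crypto.go lines above generate a minikubeCA root plus per-profile certificates, including an apiserver certificate whose SANs cover 127.0.0.1, 192.168.49.2, addons-882841, localhost and minikube. A condensed, hypothetical sketch of that general pattern with Go's crypto/x509 (error handling elided; this is not minikube's implementation):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Self-signed CA, analogous to the "minikubeCA" cert written above.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert signed by the CA, with the SANs listed in the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "addons-882841"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"addons-882841", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, srvKey)
	fmt.Printf("CA cert %d bytes, server cert %d bytes\n", len(caDER), len(srvDER))
}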
	I1121 23:48:19.968934  517697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 23:48:19.986622  517697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1121 23:48:20.007319  517697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 23:48:20.030364  517697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1121 23:48:20.049721  517697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1121 23:48:20.067831  517697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1121 23:48:20.086649  517697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 23:48:20.105563  517697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1121 23:48:20.124429  517697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 23:48:20.142128  517697 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 23:48:20.156162  517697 ssh_runner.go:195] Run: openssl version
	I1121 23:48:20.162869  517697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 23:48:20.171978  517697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 23:48:20.175908  517697 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I1121 23:48:20.175977  517697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 23:48:20.217381  517697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1121 23:48:20.225893  517697 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 23:48:20.229383  517697 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1121 23:48:20.229433  517697 kubeadm.go:401] StartCluster: {Name:addons-882841 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-882841 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 23:48:20.229506  517697 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 23:48:20.229571  517697 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 23:48:20.258235  517697 cri.go:89] found id: ""
	I1121 23:48:20.258353  517697 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 23:48:20.266115  517697 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1121 23:48:20.275056  517697 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1121 23:48:20.275129  517697 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1121 23:48:20.286902  517697 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1121 23:48:20.286923  517697 kubeadm.go:158] found existing configuration files:
	
	I1121 23:48:20.286980  517697 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1121 23:48:20.296202  517697 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1121 23:48:20.296316  517697 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1121 23:48:20.304464  517697 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1121 23:48:20.313524  517697 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1121 23:48:20.313587  517697 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1121 23:48:20.321705  517697 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1121 23:48:20.331182  517697 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1121 23:48:20.331270  517697 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1121 23:48:20.338363  517697 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1121 23:48:20.345664  517697 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1121 23:48:20.345748  517697 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1121 23:48:20.353173  517697 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1121 23:48:20.416064  517697 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1121 23:48:20.416373  517697 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1121 23:48:20.482942  517697 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1121 23:48:37.128530  517697 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1121 23:48:37.128587  517697 kubeadm.go:319] [preflight] Running pre-flight checks
	I1121 23:48:37.128676  517697 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1121 23:48:37.128767  517697 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1121 23:48:37.128802  517697 kubeadm.go:319] OS: Linux
	I1121 23:48:37.128852  517697 kubeadm.go:319] CGROUPS_CPU: enabled
	I1121 23:48:37.128901  517697 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1121 23:48:37.128948  517697 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1121 23:48:37.128996  517697 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1121 23:48:37.129045  517697 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1121 23:48:37.129093  517697 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1121 23:48:37.129138  517697 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1121 23:48:37.129194  517697 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1121 23:48:37.129240  517697 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1121 23:48:37.129312  517697 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1121 23:48:37.129406  517697 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1121 23:48:37.129496  517697 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1121 23:48:37.129558  517697 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1121 23:48:37.130973  517697 out.go:252]   - Generating certificates and keys ...
	I1121 23:48:37.131073  517697 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1121 23:48:37.131161  517697 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1121 23:48:37.131245  517697 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1121 23:48:37.131319  517697 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1121 23:48:37.131421  517697 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1121 23:48:37.131492  517697 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1121 23:48:37.131551  517697 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1121 23:48:37.131675  517697 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-882841 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1121 23:48:37.131755  517697 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1121 23:48:37.131898  517697 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-882841 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1121 23:48:37.131975  517697 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1121 23:48:37.132048  517697 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1121 23:48:37.132108  517697 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1121 23:48:37.132191  517697 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1121 23:48:37.132257  517697 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1121 23:48:37.132322  517697 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1121 23:48:37.132399  517697 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1121 23:48:37.132499  517697 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1121 23:48:37.132582  517697 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1121 23:48:37.132677  517697 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1121 23:48:37.132755  517697 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1121 23:48:37.134298  517697 out.go:252]   - Booting up control plane ...
	I1121 23:48:37.134399  517697 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1121 23:48:37.134504  517697 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1121 23:48:37.134611  517697 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1121 23:48:37.134723  517697 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1121 23:48:37.134864  517697 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1121 23:48:37.135004  517697 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1121 23:48:37.135105  517697 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1121 23:48:37.135163  517697 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1121 23:48:37.135316  517697 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1121 23:48:37.135449  517697 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1121 23:48:37.135529  517697 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001578574s
	I1121 23:48:37.135648  517697 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1121 23:48:37.135741  517697 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1121 23:48:37.135889  517697 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1121 23:48:37.136026  517697 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1121 23:48:37.136122  517697 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.762253005s
	I1121 23:48:37.136207  517697 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.63050742s
	I1121 23:48:37.136321  517697 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001177086s
	I1121 23:48:37.136456  517697 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1121 23:48:37.136587  517697 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1121 23:48:37.136671  517697 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1121 23:48:37.136871  517697 kubeadm.go:319] [mark-control-plane] Marking the node addons-882841 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1121 23:48:37.136946  517697 kubeadm.go:319] [bootstrap-token] Using token: aisjqn.1bv21k6igtg6gyat
	I1121 23:48:37.138344  517697 out.go:252]   - Configuring RBAC rules ...
	I1121 23:48:37.138495  517697 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1121 23:48:37.138604  517697 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1121 23:48:37.138815  517697 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1121 23:48:37.138977  517697 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1121 23:48:37.139120  517697 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1121 23:48:37.139217  517697 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1121 23:48:37.139345  517697 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1121 23:48:37.139431  517697 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1121 23:48:37.139488  517697 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1121 23:48:37.139496  517697 kubeadm.go:319] 
	I1121 23:48:37.139557  517697 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1121 23:48:37.139565  517697 kubeadm.go:319] 
	I1121 23:48:37.139654  517697 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1121 23:48:37.139673  517697 kubeadm.go:319] 
	I1121 23:48:37.139732  517697 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1121 23:48:37.139842  517697 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1121 23:48:37.139917  517697 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1121 23:48:37.139930  517697 kubeadm.go:319] 
	I1121 23:48:37.139995  517697 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1121 23:48:37.140004  517697 kubeadm.go:319] 
	I1121 23:48:37.140052  517697 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1121 23:48:37.140069  517697 kubeadm.go:319] 
	I1121 23:48:37.140122  517697 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1121 23:48:37.140200  517697 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1121 23:48:37.140276  517697 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1121 23:48:37.140284  517697 kubeadm.go:319] 
	I1121 23:48:37.140368  517697 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1121 23:48:37.140447  517697 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1121 23:48:37.140456  517697 kubeadm.go:319] 
	I1121 23:48:37.140541  517697 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token aisjqn.1bv21k6igtg6gyat \
	I1121 23:48:37.140678  517697 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ecfebb5fda4f065a571cf90106e71e452abce05aaa4d3155b81d7383977d6854 \
	I1121 23:48:37.140725  517697 kubeadm.go:319] 	--control-plane 
	I1121 23:48:37.140736  517697 kubeadm.go:319] 
	I1121 23:48:37.140837  517697 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1121 23:48:37.140847  517697 kubeadm.go:319] 
	I1121 23:48:37.140939  517697 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token aisjqn.1bv21k6igtg6gyat \
	I1121 23:48:37.141073  517697 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ecfebb5fda4f065a571cf90106e71e452abce05aaa4d3155b81d7383977d6854 
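The kubeadm join commands above carry a --discovery-token-ca-cert-hash; for kubeadm this value is a SHA-256 digest over the DER-encoded Subject Public Key Info of the cluster CA certificate. A small Go sketch that recomputes such a hash from the ca.crt path used in this run (illustrative only, not part of the test):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Println("read ca.crt:", err)
		return
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse cert:", err)
		return
	}
	// Hash the DER-encoded public key info, which is what kubeadm compares at join time.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}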
	I1121 23:48:37.141088  517697 cni.go:84] Creating CNI manager for ""
	I1121 23:48:37.141096  517697 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 23:48:37.142711  517697 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1121 23:48:37.144311  517697 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1121 23:48:37.149645  517697 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1121 23:48:37.149668  517697 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1121 23:48:37.163740  517697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1121 23:48:37.470381  517697 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1121 23:48:37.470532  517697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:48:37.470610  517697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-882841 minikube.k8s.io/updated_at=2025_11_21T23_48_37_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785 minikube.k8s.io/name=addons-882841 minikube.k8s.io/primary=true
	I1121 23:48:37.701514  517697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:48:37.701567  517697 ops.go:34] apiserver oom_adj: -16
	I1121 23:48:38.201667  517697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:48:38.702267  517697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:48:39.201920  517697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:48:39.701633  517697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:48:40.201654  517697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:48:40.702501  517697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1121 23:48:40.799912  517697 kubeadm.go:1114] duration metric: took 3.329431907s to wait for elevateKubeSystemPrivileges
	I1121 23:48:40.799948  517697 kubeadm.go:403] duration metric: took 20.570518195s to StartCluster
	I1121 23:48:40.799965  517697 settings.go:142] acquiring lock: {Name:mk6c31eb57ec65b047b78b4e1046e03fe7cc77bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:48:40.800089  517697 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1121 23:48:40.800515  517697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/kubeconfig: {Name:mkcd094d8a765e5a81369c0fb33cb5e696b17bb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:48:40.800700  517697 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1121 23:48:40.800726  517697 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 23:48:40.800965  517697 config.go:182] Loaded profile config "addons-882841": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:48:40.801005  517697 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1121 23:48:40.801076  517697 addons.go:70] Setting yakd=true in profile "addons-882841"
	I1121 23:48:40.801090  517697 addons.go:239] Setting addon yakd=true in "addons-882841"
	I1121 23:48:40.801114  517697 host.go:66] Checking if "addons-882841" exists ...
	I1121 23:48:40.801556  517697 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:48:40.801760  517697 addons.go:70] Setting inspektor-gadget=true in profile "addons-882841"
	I1121 23:48:40.801786  517697 addons.go:239] Setting addon inspektor-gadget=true in "addons-882841"
	I1121 23:48:40.801847  517697 host.go:66] Checking if "addons-882841" exists ...
	I1121 23:48:40.802135  517697 addons.go:70] Setting metrics-server=true in profile "addons-882841"
	I1121 23:48:40.802158  517697 addons.go:239] Setting addon metrics-server=true in "addons-882841"
	I1121 23:48:40.802183  517697 host.go:66] Checking if "addons-882841" exists ...
	I1121 23:48:40.802436  517697 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:48:40.802611  517697 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:48:40.804764  517697 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-882841"
	I1121 23:48:40.804922  517697 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-882841"
	I1121 23:48:40.804954  517697 host.go:66] Checking if "addons-882841" exists ...
	I1121 23:48:40.806700  517697 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-882841"
	I1121 23:48:40.806759  517697 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-882841"
	I1121 23:48:40.806804  517697 host.go:66] Checking if "addons-882841" exists ...
	I1121 23:48:40.807250  517697 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:48:40.807310  517697 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:48:40.807453  517697 addons.go:70] Setting cloud-spanner=true in profile "addons-882841"
	I1121 23:48:40.807979  517697 addons.go:239] Setting addon cloud-spanner=true in "addons-882841"
	I1121 23:48:40.808003  517697 host.go:66] Checking if "addons-882841" exists ...
	I1121 23:48:40.808403  517697 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:48:40.807465  517697 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-882841"
	I1121 23:48:40.807472  517697 addons.go:70] Setting default-storageclass=true in profile "addons-882841"
	I1121 23:48:40.821208  517697 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-882841"
	I1121 23:48:40.842078  517697 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-882841"
	I1121 23:48:40.842205  517697 host.go:66] Checking if "addons-882841" exists ...
	I1121 23:48:40.842772  517697 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:48:40.807479  517697 addons.go:70] Setting gcp-auth=true in profile "addons-882841"
	I1121 23:48:40.843040  517697 mustload.go:66] Loading cluster: addons-882841
	I1121 23:48:40.843240  517697 config.go:182] Loaded profile config "addons-882841": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:48:40.843538  517697 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:48:40.857251  517697 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:48:40.807484  517697 addons.go:70] Setting ingress=true in profile "addons-882841"
	I1121 23:48:40.807492  517697 addons.go:70] Setting ingress-dns=true in profile "addons-882841"
	I1121 23:48:40.857554  517697 addons.go:239] Setting addon ingress-dns=true in "addons-882841"
	I1121 23:48:40.807566  517697 out.go:179] * Verifying Kubernetes components...
	I1121 23:48:40.807714  517697 addons.go:70] Setting volcano=true in profile "addons-882841"
	I1121 23:48:40.807722  517697 addons.go:70] Setting registry=true in profile "addons-882841"
	I1121 23:48:40.807727  517697 addons.go:70] Setting registry-creds=true in profile "addons-882841"
	I1121 23:48:40.807733  517697 addons.go:70] Setting storage-provisioner=true in profile "addons-882841"
	I1121 23:48:40.807738  517697 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-882841"
	I1121 23:48:40.807769  517697 addons.go:70] Setting volumesnapshots=true in profile "addons-882841"
	I1121 23:48:40.877166  517697 addons.go:239] Setting addon volumesnapshots=true in "addons-882841"
	I1121 23:48:40.877210  517697 host.go:66] Checking if "addons-882841" exists ...
	I1121 23:48:40.877662  517697 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:48:40.893171  517697 addons.go:239] Setting addon ingress=true in "addons-882841"
	I1121 23:48:40.893243  517697 host.go:66] Checking if "addons-882841" exists ...
	I1121 23:48:40.893710  517697 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:48:40.917467  517697 host.go:66] Checking if "addons-882841" exists ...
	I1121 23:48:40.918136  517697 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:48:40.920523  517697 addons.go:239] Setting addon volcano=true in "addons-882841"
	I1121 23:48:40.920612  517697 host.go:66] Checking if "addons-882841" exists ...
	I1121 23:48:40.921203  517697 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:48:40.939318  517697 addons.go:239] Setting addon registry=true in "addons-882841"
	I1121 23:48:40.939449  517697 host.go:66] Checking if "addons-882841" exists ...
	I1121 23:48:40.940007  517697 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:48:40.947191  517697 addons.go:239] Setting addon registry-creds=true in "addons-882841"
	I1121 23:48:40.947248  517697 host.go:66] Checking if "addons-882841" exists ...
	I1121 23:48:40.947742  517697 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:48:40.959642  517697 addons.go:239] Setting addon storage-provisioner=true in "addons-882841"
	I1121 23:48:40.959695  517697 host.go:66] Checking if "addons-882841" exists ...
	I1121 23:48:40.960178  517697 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:48:40.962094  517697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 23:48:40.974852  517697 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-882841"
	I1121 23:48:40.975199  517697 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:48:40.992633  517697 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1121 23:48:40.992961  517697 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1121 23:48:41.004129  517697 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1121 23:48:41.008526  517697 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1121 23:48:41.008688  517697 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1121 23:48:41.008696  517697 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1121 23:48:41.008701  517697 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1121 23:48:41.008704  517697 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1121 23:48:41.008719  517697 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1121 23:48:41.008726  517697 out.go:179]   - Using image docker.io/registry:3.0.0
	I1121 23:48:41.008747  517697 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1121 23:48:41.008807  517697 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1121 23:48:41.011284  517697 host.go:66] Checking if "addons-882841" exists ...
	I1121 23:48:41.016902  517697 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1121 23:48:41.021285  517697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1121 23:48:41.021627  517697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:48:41.017899  517697 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1121 23:48:41.017908  517697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1121 23:48:41.018008  517697 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1121 23:48:41.021140  517697 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1121 23:48:41.035236  517697 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1121 23:48:41.035341  517697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:48:41.038425  517697 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1121 23:48:41.038452  517697 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1121 23:48:41.038516  517697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:48:41.053925  517697 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1121 23:48:41.053991  517697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1121 23:48:41.054094  517697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:48:41.062608  517697 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1121 23:48:41.065935  517697 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1121 23:48:41.070064  517697 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1121 23:48:41.076137  517697 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1121 23:48:41.079015  517697 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1121 23:48:41.090942  517697 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1121 23:48:41.091924  517697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:48:41.093239  517697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:48:41.105192  517697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1121 23:48:41.105272  517697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:48:41.110181  517697 addons.go:239] Setting addon default-storageclass=true in "addons-882841"
	I1121 23:48:41.110219  517697 host.go:66] Checking if "addons-882841" exists ...
	I1121 23:48:41.110619  517697 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:48:41.122857  517697 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1121 23:48:41.129932  517697 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1121 23:48:41.132891  517697 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1121 23:48:41.135856  517697 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1121 23:48:41.135880  517697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1121 23:48:41.135956  517697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:48:41.156495  517697 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1121 23:48:41.156514  517697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1121 23:48:41.156572  517697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:48:41.157390  517697 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1121 23:48:41.183218  517697 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1121 23:48:41.183239  517697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1121 23:48:41.183301  517697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:48:41.185958  517697 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1121 23:48:41.191268  517697 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1121 23:48:41.191354  517697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1121 23:48:41.191473  517697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:48:41.205598  517697 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 23:48:41.208504  517697 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 23:48:41.208527  517697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 23:48:41.208587  517697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:48:41.215711  517697 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1121 23:48:41.221977  517697 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1121 23:48:41.225680  517697 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1121 23:48:41.225751  517697 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1121 23:48:41.226075  517697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:48:41.260681  517697 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-882841"
	I1121 23:48:41.260723  517697 host.go:66] Checking if "addons-882841" exists ...
	I1121 23:48:41.261117  517697 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	W1121 23:48:41.284444  517697 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1121 23:48:41.292278  517697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	I1121 23:48:41.315521  517697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	I1121 23:48:41.335825  517697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	I1121 23:48:41.358169  517697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	I1121 23:48:41.381915  517697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	I1121 23:48:41.410063  517697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	I1121 23:48:41.416459  517697 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 23:48:41.416477  517697 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 23:48:41.416547  517697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:48:41.420960  517697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	I1121 23:48:41.422054  517697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	I1121 23:48:41.427521  517697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	I1121 23:48:41.428867  517697 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 23:48:41.441926  517697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	I1121 23:48:41.442740  517697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	I1121 23:48:41.443356  517697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	I1121 23:48:41.462979  517697 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1121 23:48:41.465197  517697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	W1121 23:48:41.468442  517697 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1121 23:48:41.468469  517697 retry.go:31] will retry after 227.362888ms: ssh: handshake failed: EOF
	I1121 23:48:41.472466  517697 out.go:179]   - Using image docker.io/busybox:stable
	I1121 23:48:41.475275  517697 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1121 23:48:41.475296  517697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1121 23:48:41.475361  517697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:48:41.493329  517697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	I1121 23:48:41.504926  517697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	W1121 23:48:41.505995  517697 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1121 23:48:41.506019  517697 retry.go:31] will retry after 341.176085ms: ssh: handshake failed: EOF
	W1121 23:48:41.697448  517697 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1121 23:48:41.697527  517697 retry.go:31] will retry after 327.994212ms: ssh: handshake failed: EOF
	I1121 23:48:42.200413  517697 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1121 23:48:42.200500  517697 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1121 23:48:42.275539  517697 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1121 23:48:42.275622  517697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1121 23:48:42.297953  517697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 23:48:42.309739  517697 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1121 23:48:42.309834  517697 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1121 23:48:42.341174  517697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1121 23:48:42.342395  517697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1121 23:48:42.365927  517697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1121 23:48:42.368550  517697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 23:48:42.399231  517697 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1121 23:48:42.399254  517697 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1121 23:48:42.418555  517697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1121 23:48:42.425233  517697 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1121 23:48:42.425312  517697 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1121 23:48:42.429285  517697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1121 23:48:42.437173  517697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1121 23:48:42.451364  517697 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1121 23:48:42.451441  517697 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1121 23:48:42.492612  517697 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1121 23:48:42.492689  517697 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1121 23:48:42.548252  517697 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1121 23:48:42.548330  517697 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1121 23:48:42.558540  517697 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.129612068s)
	I1121 23:48:42.558707  517697 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.566050239s)
	I1121 23:48:42.558741  517697 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
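	The sed pipeline that just completed rewrites the coredns ConfigMap so that host.minikube.internal resolves to the Docker network gateway (192.168.49.1) and query logging is enabled. Reconstructed from the sed expressions above (not captured verbatim from this run), the patched Corefile should gain roughly the block below; you could confirm with:

		sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
		  -n kube-system get configmap coredns -o yaml

		# expected shape of the patched Corefile section (other default directives unchanged)
		log            # inserted before the existing errors line
		errors
		...
		hosts {
		   192.168.49.1 host.minikube.internal
		   fallthrough
		}
		forward . /etc/resolv.conf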
	I1121 23:48:42.560162  517697 node_ready.go:35] waiting up to 6m0s for node "addons-882841" to be "Ready" ...
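	node_ready.go now polls the node object until its Ready condition is True; the "Ready":"False" retries later in the log are this poll. A hedged equivalent using kubectl wait, assuming the same in-node kubeconfig and the 6m budget from the log:

		sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
		  wait --for=condition=Ready node/addons-882841 --timeout=6m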
	I1121 23:48:42.585313  517697 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1121 23:48:42.585334  517697 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1121 23:48:42.600048  517697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1121 23:48:42.606452  517697 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1121 23:48:42.606519  517697 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1121 23:48:42.622931  517697 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1121 23:48:42.623007  517697 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1121 23:48:42.638911  517697 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1121 23:48:42.638935  517697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1121 23:48:42.791446  517697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1121 23:48:42.797349  517697 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1121 23:48:42.797376  517697 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1121 23:48:42.842188  517697 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1121 23:48:42.842214  517697 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1121 23:48:42.862812  517697 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1121 23:48:42.862835  517697 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1121 23:48:42.873710  517697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1121 23:48:42.878867  517697 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1121 23:48:42.878896  517697 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1121 23:48:42.949932  517697 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1121 23:48:42.949956  517697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1121 23:48:43.036741  517697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1121 23:48:43.041664  517697 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1121 23:48:43.041688  517697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1121 23:48:43.065098  517697 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-882841" context rescaled to 1 replicas
	I1121 23:48:43.104097  517697 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1121 23:48:43.104130  517697 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1121 23:48:43.131184  517697 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1121 23:48:43.131209  517697 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1121 23:48:43.187217  517697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1121 23:48:43.191056  517697 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1121 23:48:43.191078  517697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1121 23:48:43.196607  517697 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1121 23:48:43.196632  517697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1121 23:48:43.261080  517697 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1121 23:48:43.261104  517697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1121 23:48:43.282692  517697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1121 23:48:43.572640  517697 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1121 23:48:43.572667  517697 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1121 23:48:43.726081  517697 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.428031103s)
	I1121 23:48:43.932650  517697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1121 23:48:44.622106  517697 node_ready.go:57] node "addons-882841" has "Ready":"False" status (will retry)
	W1121 23:48:47.065403  517697 node_ready.go:57] node "addons-882841" has "Ready":"False" status (will retry)
	I1121 23:48:47.212765  517697 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.871485315s)
	I1121 23:48:47.212799  517697 addons.go:495] Verifying addon ingress=true in "addons-882841"
	I1121 23:48:47.212973  517697 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.870514611s)
	I1121 23:48:47.213015  517697 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.847018641s)
	I1121 23:48:47.213064  517697 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.844448098s)
	I1121 23:48:47.213099  517697 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.794474042s)
	I1121 23:48:47.213132  517697 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.783785874s)
	I1121 23:48:47.213159  517697 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.775914766s)
	I1121 23:48:47.213213  517697 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.613096772s)
	I1121 23:48:47.213356  517697 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.421879992s)
	I1121 23:48:47.213425  517697 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.339689257s)
	I1121 23:48:47.213443  517697 addons.go:495] Verifying addon registry=true in "addons-882841"
	I1121 23:48:47.213530  517697 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.176760227s)
	I1121 23:48:47.213545  517697 addons.go:495] Verifying addon metrics-server=true in "addons-882841"
	I1121 23:48:47.213579  517697 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.026337395s)
	I1121 23:48:47.213882  517697 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.931156996s)
	W1121 23:48:47.213911  517697 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1121 23:48:47.213929  517697 retry.go:31] will retry after 137.248039ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
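	The failure above is an ordering race rather than a broken manifest: the VolumeSnapshotClass is applied in the same kubectl invocation that creates its CRD, and the API server has not yet established the new resource mapping, hence "ensure CRDs are installed first". minikube simply retries, and the re-apply with --force at 23:48:47.352 below succeeds once the CRDs are registered. A minimal sketch of a two-phase apply that avoids the race, using only the manifests already on the node (the KUBECTL shorthand is illustrative):

		KUBECTL="sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig"
		$KUBECTL apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
		$KUBECTL wait --for condition=established \
		  crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
		$KUBECTL apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml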
	I1121 23:48:47.216039  517697 out.go:179] * Verifying ingress addon...
	I1121 23:48:47.218009  517697 out.go:179] * Verifying registry addon...
	I1121 23:48:47.218010  517697 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-882841 service yakd-dashboard -n yakd-dashboard
	
	I1121 23:48:47.222278  517697 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1121 23:48:47.223070  517697 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1121 23:48:47.228113  517697 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1121 23:48:47.228131  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:47.233595  517697 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1121 23:48:47.233613  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:47.352282  517697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1121 23:48:47.551727  517697 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.619015174s)
	I1121 23:48:47.551764  517697 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-882841"
	I1121 23:48:47.554633  517697 out.go:179] * Verifying csi-hostpath-driver addon...
	I1121 23:48:47.558397  517697 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1121 23:48:47.569660  517697 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1121 23:48:47.569729  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
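	The long runs of "waiting for pod ... current state: Pending" that follow are kapi.go polling the labelled addon pods until they leave Pending. Hedged kubectl equivalents for the three selectors being watched here, with the same labels and namespaces as the log (kubectl wait targets the stricter Ready condition rather than just Running):

		KUBECTL="sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig"
		$KUBECTL -n kube-system   wait --for=condition=Ready pod -l kubernetes.io/minikube-addons=registry --timeout=6m
		$KUBECTL -n ingress-nginx wait --for=condition=Ready pod -l app.kubernetes.io/name=ingress-nginx --timeout=6m
		$KUBECTL -n kube-system   wait --for=condition=Ready pod -l kubernetes.io/minikube-addons=csi-hostpath-driver --timeout=6m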
	I1121 23:48:47.737586  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:47.738405  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:48.061918  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:48.225916  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:48.226248  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:48.562211  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:48.625190  517697 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1121 23:48:48.625301  517697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:48:48.644632  517697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	I1121 23:48:48.726251  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:48.726446  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:48.750941  517697 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1121 23:48:48.763430  517697 addons.go:239] Setting addon gcp-auth=true in "addons-882841"
	I1121 23:48:48.763476  517697 host.go:66] Checking if "addons-882841" exists ...
	I1121 23:48:48.763939  517697 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:48:48.779975  517697 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1121 23:48:48.780025  517697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:48:48.796885  517697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	I1121 23:48:49.061896  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:49.226404  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:49.226545  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:49.562094  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1121 23:48:49.563918  517697 node_ready.go:57] node "addons-882841" has "Ready":"False" status (will retry)
	I1121 23:48:49.726115  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:49.726369  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:50.064918  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:50.141656  517697 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.78932831s)
	I1121 23:48:50.141793  517697 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.361792432s)
	I1121 23:48:50.144974  517697 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1121 23:48:50.147969  517697 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1121 23:48:50.150894  517697 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1121 23:48:50.150915  517697 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1121 23:48:50.168601  517697 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1121 23:48:50.168627  517697 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1121 23:48:50.182828  517697 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1121 23:48:50.182853  517697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1121 23:48:50.197267  517697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1121 23:48:50.227251  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:50.227799  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:50.569789  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:50.676131  517697 addons.go:495] Verifying addon gcp-auth=true in "addons-882841"
	I1121 23:48:50.680174  517697 out.go:179] * Verifying gcp-auth addon...
	I1121 23:48:50.683738  517697 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1121 23:48:50.697275  517697 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1121 23:48:50.697345  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:50.798280  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:50.798388  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:51.061547  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:51.186844  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:51.225706  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:51.226050  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:51.563019  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1121 23:48:51.564754  517697 node_ready.go:57] node "addons-882841" has "Ready":"False" status (will retry)
	I1121 23:48:51.687027  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:51.725980  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:51.726125  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:52.061508  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:52.187273  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:52.225188  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:52.226134  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:52.562351  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:52.686783  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:52.726106  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:52.726291  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:53.061251  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:53.187139  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:53.226699  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:53.226763  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:53.563539  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1121 23:48:53.565561  517697 node_ready.go:57] node "addons-882841" has "Ready":"False" status (will retry)
	I1121 23:48:53.687980  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:53.726016  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:53.726578  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:54.061824  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:54.186807  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:54.226561  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:54.226747  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:54.563124  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:54.687570  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:54.726097  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:54.726986  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:55.063073  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:55.187204  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:55.226823  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:55.227161  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:55.562710  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:55.688139  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:55.726509  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:55.726782  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:56.062614  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1121 23:48:56.063923  517697 node_ready.go:57] node "addons-882841" has "Ready":"False" status (will retry)
	I1121 23:48:56.187003  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:56.225895  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:56.226667  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:56.562961  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:56.687069  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:56.726608  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:56.727051  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:57.063599  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:57.186794  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:57.225554  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:57.226029  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:57.563267  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:57.687499  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:57.725119  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:57.726256  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:58.061478  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:58.186451  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:58.225058  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:58.226205  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:58.561761  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1121 23:48:58.565599  517697 node_ready.go:57] node "addons-882841" has "Ready":"False" status (will retry)
	I1121 23:48:58.687677  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:58.726602  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:58.727014  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:59.062247  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:59.187169  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:59.226299  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:59.226608  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:48:59.561553  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:48:59.687264  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:48:59.726299  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:48:59.726492  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:00.076039  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:00.196691  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:00.239729  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:00.244383  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:00.561728  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:00.686747  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:00.725892  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:00.726202  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:01.061697  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1121 23:49:01.063632  517697 node_ready.go:57] node "addons-882841" has "Ready":"False" status (will retry)
	I1121 23:49:01.186874  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:01.225673  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:01.226085  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:01.562097  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:01.688322  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:01.726889  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:01.727325  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:02.062222  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:02.186562  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:02.226082  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:02.226253  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:02.561882  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:02.687128  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:02.725633  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:02.725788  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:03.063308  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1121 23:49:03.064255  517697 node_ready.go:57] node "addons-882841" has "Ready":"False" status (will retry)
	I1121 23:49:03.186917  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:03.226137  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:03.226321  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:03.561513  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:03.686947  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:03.726091  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:03.726942  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:04.062007  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:04.186997  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:04.226157  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:04.226333  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:04.561975  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:04.687072  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:04.726155  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:04.726387  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:05.062040  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:05.192916  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:05.231624  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:05.232209  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:05.564761  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1121 23:49:05.565157  517697 node_ready.go:57] node "addons-882841" has "Ready":"False" status (will retry)
	I1121 23:49:05.687452  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:05.724971  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:05.726226  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:06.061488  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:06.186597  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:06.225841  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:06.225942  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:06.561885  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:06.687415  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:06.725334  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:06.725705  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:07.061687  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:07.186991  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:07.226382  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:07.226531  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:07.561770  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:07.687534  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:07.726183  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:07.726998  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:08.062329  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1121 23:49:08.063658  517697 node_ready.go:57] node "addons-882841" has "Ready":"False" status (will retry)
	I1121 23:49:08.186447  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:08.225429  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:08.226019  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:08.561941  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:08.686781  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:08.725936  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:08.726216  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:09.062730  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:09.186894  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:09.226053  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:09.226370  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:09.561763  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:09.687353  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:09.725387  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:09.726540  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:10.062277  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1121 23:49:10.064037  517697 node_ready.go:57] node "addons-882841" has "Ready":"False" status (will retry)
	I1121 23:49:10.187005  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:10.226247  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:10.226506  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:10.561590  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:10.686959  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:10.731027  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:10.731447  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:11.061677  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:11.186865  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:11.225907  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:11.226220  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:11.561064  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:11.687483  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:11.726400  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:11.726909  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:12.062146  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1121 23:49:12.064143  517697 node_ready.go:57] node "addons-882841" has "Ready":"False" status (will retry)
	I1121 23:49:12.187347  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:12.225612  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:12.226251  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:12.563342  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:12.687336  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:12.725312  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:12.726375  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:13.061260  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:13.187283  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:13.226420  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:13.226871  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:13.562089  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:13.687007  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:13.726165  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:13.726301  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:14.062473  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1121 23:49:14.064308  517697 node_ready.go:57] node "addons-882841" has "Ready":"False" status (will retry)
	I1121 23:49:14.187177  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:14.226479  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:14.226766  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:14.562288  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:14.686823  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:14.726126  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:14.726930  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:15.061756  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:15.187536  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:15.229283  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:15.229534  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:15.562613  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:15.687244  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:15.724929  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:15.726124  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:16.061275  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:16.187309  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:16.226590  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:16.226705  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:16.561153  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1121 23:49:16.562888  517697 node_ready.go:57] node "addons-882841" has "Ready":"False" status (will retry)
	I1121 23:49:16.686897  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:16.726452  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:16.726896  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:17.061964  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:17.187095  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:17.226224  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:17.228108  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:17.561718  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:17.687317  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:17.726552  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:17.726915  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:18.062285  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:18.186578  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:18.225353  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:18.226376  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:18.561880  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1121 23:49:18.563940  517697 node_ready.go:57] node "addons-882841" has "Ready":"False" status (will retry)
	I1121 23:49:18.687080  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:18.726038  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:18.726183  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:19.061317  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:19.187314  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:19.226257  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:19.226396  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:19.561080  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:19.686947  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:19.725411  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:19.726276  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:20.062498  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:20.186997  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:20.225881  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:20.225948  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:20.561867  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:20.686786  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:20.725545  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:20.726314  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:21.061452  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1121 23:49:21.063459  517697 node_ready.go:57] node "addons-882841" has "Ready":"False" status (will retry)
	I1121 23:49:21.187379  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:21.225793  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:21.225840  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:21.561845  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:21.686625  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:21.726416  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:21.726415  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:22.061295  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:22.187096  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:22.226372  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:22.226460  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:22.561144  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:22.686865  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:22.725639  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:22.726303  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:23.091883  517697 node_ready.go:49] node "addons-882841" is "Ready"
	I1121 23:49:23.091915  517697 node_ready.go:38] duration metric: took 40.531603726s for node "addons-882841" to be "Ready" ...
	I1121 23:49:23.091932  517697 api_server.go:52] waiting for apiserver process to appear ...
	I1121 23:49:23.091991  517697 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 23:49:23.092710  517697 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1121 23:49:23.092736  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:23.107070  517697 api_server.go:72] duration metric: took 42.306317438s to wait for apiserver process to appear ...
	I1121 23:49:23.107095  517697 api_server.go:88] waiting for apiserver healthz status ...
	I1121 23:49:23.107117  517697 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1121 23:49:23.118923  517697 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1121 23:49:23.121086  517697 api_server.go:141] control plane version: v1.34.1
	I1121 23:49:23.121114  517697 api_server.go:131] duration metric: took 14.011805ms to wait for apiserver health ...
	I1121 23:49:23.121124  517697 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 23:49:23.144501  517697 system_pods.go:59] 19 kube-system pods found
	I1121 23:49:23.144537  517697 system_pods.go:61] "coredns-66bc5c9577-zjrtb" [98eb0f4e-21c8-4403-adb4-1d0f4decde4b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 23:49:23.144544  517697 system_pods.go:61] "csi-hostpath-attacher-0" [974f6c76-34db-4887-a36d-ef4b2ccc1e37] Pending
	I1121 23:49:23.144551  517697 system_pods.go:61] "csi-hostpath-resizer-0" [b719458e-8db2-43dc-8896-8fd232b5bc58] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1121 23:49:23.144556  517697 system_pods.go:61] "csi-hostpathplugin-mkngh" [083d366a-f53b-4a51-b7ee-7acd56800894] Pending
	I1121 23:49:23.144560  517697 system_pods.go:61] "etcd-addons-882841" [5565d49c-434d-4db8-94fc-d88d8f8e9bd2] Running
	I1121 23:49:23.144564  517697 system_pods.go:61] "kindnet-wghw5" [f4454a98-7446-4179-a382-982d231fb9a7] Running
	I1121 23:49:23.144568  517697 system_pods.go:61] "kube-apiserver-addons-882841" [6bc0f536-d888-4818-9e4b-597d98d3edb4] Running
	I1121 23:49:23.144572  517697 system_pods.go:61] "kube-controller-manager-addons-882841" [1a2214c6-e2e0-4bb0-8c36-3571a5fda69c] Running
	I1121 23:49:23.144582  517697 system_pods.go:61] "kube-ingress-dns-minikube" [05451ec4-2e91-4a5d-8d8e-29b8f3931ab2] Pending
	I1121 23:49:23.144586  517697 system_pods.go:61] "kube-proxy-gthqw" [05b79d7f-9659-444f-946f-88f641a45731] Running
	I1121 23:49:23.144593  517697 system_pods.go:61] "kube-scheduler-addons-882841" [4160616a-418b-48a6-8c7c-3dc4f43ace3c] Running
	I1121 23:49:23.144600  517697 system_pods.go:61] "metrics-server-85b7d694d7-7tk8r" [99849e7c-e2a9-4b60-b8f9-7ed8bd487c73] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1121 23:49:23.144611  517697 system_pods.go:61] "nvidia-device-plugin-daemonset-4jvp9" [54878aa0-88b5-4a6b-ad02-91d34115cc3d] Pending
	I1121 23:49:23.144618  517697 system_pods.go:61] "registry-6b586f9694-5jvr4" [7a29be8b-519d-4b81-81ff-bac494b2ea86] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1121 23:49:23.144625  517697 system_pods.go:61] "registry-creds-764b6fb674-8wv2f" [dfc3c5ef-fcf8-4a4c-908c-fa2a665d682c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1121 23:49:23.144630  517697 system_pods.go:61] "registry-proxy-rrtfc" [1d8939ca-bf48-4609-94de-6b5ca07c973f] Pending
	I1121 23:49:23.144635  517697 system_pods.go:61] "snapshot-controller-7d9fbc56b8-44w6b" [9fceaa9e-21a1-46a5-acea-1901a3b30539] Pending
	I1121 23:49:23.144648  517697 system_pods.go:61] "snapshot-controller-7d9fbc56b8-q99bt" [bb9f8fcb-0d34-489e-b7f3-e8c20fc906bb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 23:49:23.144652  517697 system_pods.go:61] "storage-provisioner" [0ff2b406-8d5a-4cf0-a6a5-c79a4614dcf6] Pending
	I1121 23:49:23.144658  517697 system_pods.go:74] duration metric: took 23.528822ms to wait for pod list to return data ...
	I1121 23:49:23.144668  517697 default_sa.go:34] waiting for default service account to be created ...
	I1121 23:49:23.153980  517697 default_sa.go:45] found service account: "default"
	I1121 23:49:23.154008  517697 default_sa.go:55] duration metric: took 9.332784ms for default service account to be created ...
	I1121 23:49:23.154018  517697 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 23:49:23.257937  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:23.260230  517697 system_pods.go:86] 19 kube-system pods found
	I1121 23:49:23.260261  517697 system_pods.go:89] "coredns-66bc5c9577-zjrtb" [98eb0f4e-21c8-4403-adb4-1d0f4decde4b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 23:49:23.260268  517697 system_pods.go:89] "csi-hostpath-attacher-0" [974f6c76-34db-4887-a36d-ef4b2ccc1e37] Pending
	I1121 23:49:23.260275  517697 system_pods.go:89] "csi-hostpath-resizer-0" [b719458e-8db2-43dc-8896-8fd232b5bc58] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1121 23:49:23.260279  517697 system_pods.go:89] "csi-hostpathplugin-mkngh" [083d366a-f53b-4a51-b7ee-7acd56800894] Pending
	I1121 23:49:23.260284  517697 system_pods.go:89] "etcd-addons-882841" [5565d49c-434d-4db8-94fc-d88d8f8e9bd2] Running
	I1121 23:49:23.260288  517697 system_pods.go:89] "kindnet-wghw5" [f4454a98-7446-4179-a382-982d231fb9a7] Running
	I1121 23:49:23.260292  517697 system_pods.go:89] "kube-apiserver-addons-882841" [6bc0f536-d888-4818-9e4b-597d98d3edb4] Running
	I1121 23:49:23.260297  517697 system_pods.go:89] "kube-controller-manager-addons-882841" [1a2214c6-e2e0-4bb0-8c36-3571a5fda69c] Running
	I1121 23:49:23.260303  517697 system_pods.go:89] "kube-ingress-dns-minikube" [05451ec4-2e91-4a5d-8d8e-29b8f3931ab2] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1121 23:49:23.260310  517697 system_pods.go:89] "kube-proxy-gthqw" [05b79d7f-9659-444f-946f-88f641a45731] Running
	I1121 23:49:23.260315  517697 system_pods.go:89] "kube-scheduler-addons-882841" [4160616a-418b-48a6-8c7c-3dc4f43ace3c] Running
	I1121 23:49:23.260323  517697 system_pods.go:89] "metrics-server-85b7d694d7-7tk8r" [99849e7c-e2a9-4b60-b8f9-7ed8bd487c73] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1121 23:49:23.260327  517697 system_pods.go:89] "nvidia-device-plugin-daemonset-4jvp9" [54878aa0-88b5-4a6b-ad02-91d34115cc3d] Pending
	I1121 23:49:23.260341  517697 system_pods.go:89] "registry-6b586f9694-5jvr4" [7a29be8b-519d-4b81-81ff-bac494b2ea86] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1121 23:49:23.260348  517697 system_pods.go:89] "registry-creds-764b6fb674-8wv2f" [dfc3c5ef-fcf8-4a4c-908c-fa2a665d682c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1121 23:49:23.260358  517697 system_pods.go:89] "registry-proxy-rrtfc" [1d8939ca-bf48-4609-94de-6b5ca07c973f] Pending
	I1121 23:49:23.260363  517697 system_pods.go:89] "snapshot-controller-7d9fbc56b8-44w6b" [9fceaa9e-21a1-46a5-acea-1901a3b30539] Pending
	I1121 23:49:23.260368  517697 system_pods.go:89] "snapshot-controller-7d9fbc56b8-q99bt" [bb9f8fcb-0d34-489e-b7f3-e8c20fc906bb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 23:49:23.260372  517697 system_pods.go:89] "storage-provisioner" [0ff2b406-8d5a-4cf0-a6a5-c79a4614dcf6] Pending
	I1121 23:49:23.260391  517697 retry.go:31] will retry after 292.015422ms: missing components: kube-dns
	I1121 23:49:23.287719  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:23.288033  517697 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1121 23:49:23.288051  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:23.558948  517697 system_pods.go:86] 19 kube-system pods found
	I1121 23:49:23.558982  517697 system_pods.go:89] "coredns-66bc5c9577-zjrtb" [98eb0f4e-21c8-4403-adb4-1d0f4decde4b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 23:49:23.558991  517697 system_pods.go:89] "csi-hostpath-attacher-0" [974f6c76-34db-4887-a36d-ef4b2ccc1e37] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1121 23:49:23.558998  517697 system_pods.go:89] "csi-hostpath-resizer-0" [b719458e-8db2-43dc-8896-8fd232b5bc58] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1121 23:49:23.559006  517697 system_pods.go:89] "csi-hostpathplugin-mkngh" [083d366a-f53b-4a51-b7ee-7acd56800894] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1121 23:49:23.559011  517697 system_pods.go:89] "etcd-addons-882841" [5565d49c-434d-4db8-94fc-d88d8f8e9bd2] Running
	I1121 23:49:23.559016  517697 system_pods.go:89] "kindnet-wghw5" [f4454a98-7446-4179-a382-982d231fb9a7] Running
	I1121 23:49:23.559026  517697 system_pods.go:89] "kube-apiserver-addons-882841" [6bc0f536-d888-4818-9e4b-597d98d3edb4] Running
	I1121 23:49:23.559031  517697 system_pods.go:89] "kube-controller-manager-addons-882841" [1a2214c6-e2e0-4bb0-8c36-3571a5fda69c] Running
	I1121 23:49:23.559040  517697 system_pods.go:89] "kube-ingress-dns-minikube" [05451ec4-2e91-4a5d-8d8e-29b8f3931ab2] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1121 23:49:23.559044  517697 system_pods.go:89] "kube-proxy-gthqw" [05b79d7f-9659-444f-946f-88f641a45731] Running
	I1121 23:49:23.559060  517697 system_pods.go:89] "kube-scheduler-addons-882841" [4160616a-418b-48a6-8c7c-3dc4f43ace3c] Running
	I1121 23:49:23.559066  517697 system_pods.go:89] "metrics-server-85b7d694d7-7tk8r" [99849e7c-e2a9-4b60-b8f9-7ed8bd487c73] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1121 23:49:23.559070  517697 system_pods.go:89] "nvidia-device-plugin-daemonset-4jvp9" [54878aa0-88b5-4a6b-ad02-91d34115cc3d] Pending
	I1121 23:49:23.559083  517697 system_pods.go:89] "registry-6b586f9694-5jvr4" [7a29be8b-519d-4b81-81ff-bac494b2ea86] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1121 23:49:23.559089  517697 system_pods.go:89] "registry-creds-764b6fb674-8wv2f" [dfc3c5ef-fcf8-4a4c-908c-fa2a665d682c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1121 23:49:23.559093  517697 system_pods.go:89] "registry-proxy-rrtfc" [1d8939ca-bf48-4609-94de-6b5ca07c973f] Pending
	I1121 23:49:23.559099  517697 system_pods.go:89] "snapshot-controller-7d9fbc56b8-44w6b" [9fceaa9e-21a1-46a5-acea-1901a3b30539] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 23:49:23.559106  517697 system_pods.go:89] "snapshot-controller-7d9fbc56b8-q99bt" [bb9f8fcb-0d34-489e-b7f3-e8c20fc906bb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 23:49:23.559114  517697 system_pods.go:89] "storage-provisioner" [0ff2b406-8d5a-4cf0-a6a5-c79a4614dcf6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1121 23:49:23.559130  517697 retry.go:31] will retry after 316.889207ms: missing components: kube-dns
	I1121 23:49:23.562696  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:23.687425  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:23.779408  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:23.779565  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:23.883709  517697 system_pods.go:86] 19 kube-system pods found
	I1121 23:49:23.883755  517697 system_pods.go:89] "coredns-66bc5c9577-zjrtb" [98eb0f4e-21c8-4403-adb4-1d0f4decde4b] Running
	I1121 23:49:23.883765  517697 system_pods.go:89] "csi-hostpath-attacher-0" [974f6c76-34db-4887-a36d-ef4b2ccc1e37] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1121 23:49:23.883772  517697 system_pods.go:89] "csi-hostpath-resizer-0" [b719458e-8db2-43dc-8896-8fd232b5bc58] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1121 23:49:23.883782  517697 system_pods.go:89] "csi-hostpathplugin-mkngh" [083d366a-f53b-4a51-b7ee-7acd56800894] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1121 23:49:23.883787  517697 system_pods.go:89] "etcd-addons-882841" [5565d49c-434d-4db8-94fc-d88d8f8e9bd2] Running
	I1121 23:49:23.883792  517697 system_pods.go:89] "kindnet-wghw5" [f4454a98-7446-4179-a382-982d231fb9a7] Running
	I1121 23:49:23.883808  517697 system_pods.go:89] "kube-apiserver-addons-882841" [6bc0f536-d888-4818-9e4b-597d98d3edb4] Running
	I1121 23:49:23.883813  517697 system_pods.go:89] "kube-controller-manager-addons-882841" [1a2214c6-e2e0-4bb0-8c36-3571a5fda69c] Running
	I1121 23:49:23.883833  517697 system_pods.go:89] "kube-ingress-dns-minikube" [05451ec4-2e91-4a5d-8d8e-29b8f3931ab2] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1121 23:49:23.883837  517697 system_pods.go:89] "kube-proxy-gthqw" [05b79d7f-9659-444f-946f-88f641a45731] Running
	I1121 23:49:23.883842  517697 system_pods.go:89] "kube-scheduler-addons-882841" [4160616a-418b-48a6-8c7c-3dc4f43ace3c] Running
	I1121 23:49:23.883854  517697 system_pods.go:89] "metrics-server-85b7d694d7-7tk8r" [99849e7c-e2a9-4b60-b8f9-7ed8bd487c73] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1121 23:49:23.883861  517697 system_pods.go:89] "nvidia-device-plugin-daemonset-4jvp9" [54878aa0-88b5-4a6b-ad02-91d34115cc3d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1121 23:49:23.883871  517697 system_pods.go:89] "registry-6b586f9694-5jvr4" [7a29be8b-519d-4b81-81ff-bac494b2ea86] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1121 23:49:23.883878  517697 system_pods.go:89] "registry-creds-764b6fb674-8wv2f" [dfc3c5ef-fcf8-4a4c-908c-fa2a665d682c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1121 23:49:23.883884  517697 system_pods.go:89] "registry-proxy-rrtfc" [1d8939ca-bf48-4609-94de-6b5ca07c973f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1121 23:49:23.883891  517697 system_pods.go:89] "snapshot-controller-7d9fbc56b8-44w6b" [9fceaa9e-21a1-46a5-acea-1901a3b30539] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 23:49:23.883900  517697 system_pods.go:89] "snapshot-controller-7d9fbc56b8-q99bt" [bb9f8fcb-0d34-489e-b7f3-e8c20fc906bb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1121 23:49:23.883904  517697 system_pods.go:89] "storage-provisioner" [0ff2b406-8d5a-4cf0-a6a5-c79a4614dcf6] Running
	I1121 23:49:23.883927  517697 system_pods.go:126] duration metric: took 729.893104ms to wait for k8s-apps to be running ...
	I1121 23:49:23.883939  517697 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 23:49:23.884004  517697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 23:49:23.907047  517697 system_svc.go:56] duration metric: took 23.097745ms WaitForService to wait for kubelet
	I1121 23:49:23.907076  517697 kubeadm.go:587] duration metric: took 43.106328361s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 23:49:23.907100  517697 node_conditions.go:102] verifying NodePressure condition ...
	I1121 23:49:23.910894  517697 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1121 23:49:23.910927  517697 node_conditions.go:123] node cpu capacity is 2
	I1121 23:49:23.910948  517697 node_conditions.go:105] duration metric: took 3.838306ms to run NodePressure ...
	I1121 23:49:23.910968  517697 start.go:242] waiting for startup goroutines ...
	I1121 23:49:24.062817  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:24.186814  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:24.227544  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:24.228325  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:24.561932  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:24.688700  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:24.733242  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:24.733629  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:25.062555  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:25.187223  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:25.227140  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:25.227449  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:25.561986  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:25.687540  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:25.726756  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:25.727615  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:26.062167  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:26.187520  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:26.225860  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:26.227830  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:26.562917  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:26.688047  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:26.789052  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:26.789289  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:27.062102  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:27.187542  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:27.227312  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:27.227503  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:27.562246  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:27.687740  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:27.726586  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:27.727240  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:28.063082  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:28.187265  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:28.227395  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:28.227987  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:28.562660  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:28.686665  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:28.726555  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:28.727197  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:29.062941  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:29.187278  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:29.227551  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:29.228218  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:29.561870  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:29.687358  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:29.727107  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:29.727350  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:30.062266  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:30.187810  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:30.227770  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:30.227900  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:30.562480  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:30.687643  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:30.731367  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:30.731694  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:31.062417  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:31.187731  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:31.227662  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:31.228003  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:31.564337  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:31.688418  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:31.726863  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:31.727030  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:32.063185  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:32.187200  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:32.226317  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:32.226749  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:32.565751  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:32.691172  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:32.728699  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:32.736937  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:33.062920  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:33.187650  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:33.228365  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:33.228646  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:33.564268  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:33.688202  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:33.727579  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:33.727813  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:34.062949  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:34.187375  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:34.225725  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:34.226398  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:34.562741  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:34.688385  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:34.728882  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:34.729273  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:35.062729  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:35.187399  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:35.228380  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:35.228827  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:35.562451  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:35.687595  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:35.788767  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:35.788721  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:36.062187  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:36.187024  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:36.227244  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:36.227384  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:36.561639  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:36.686801  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:36.727237  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:36.727445  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:37.062848  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:37.187493  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:37.227419  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:37.227618  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:37.562745  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:37.687004  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:37.726276  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:37.727040  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:38.062993  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:38.187243  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:38.227530  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:38.228295  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:38.562077  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:38.687132  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:38.726423  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:38.727363  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:39.062066  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:39.187605  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:39.228471  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:39.228751  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:39.562917  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:39.693116  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:39.728033  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:39.728456  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:40.062685  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:40.186735  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:40.226654  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:40.226693  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:40.561890  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:40.687342  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:40.735286  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:40.736416  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:41.062233  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:41.187405  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:41.288710  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:41.289033  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:41.562904  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:41.686714  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:41.726845  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:41.726958  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:42.063444  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:42.187410  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:42.226561  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:42.228088  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:42.561869  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:42.687551  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:42.725965  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:42.726852  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:43.068039  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:43.187655  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:43.228111  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:43.228505  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:43.562270  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:43.687608  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:43.736256  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:43.736719  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:44.065521  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:44.187726  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:44.228549  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:44.229538  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:44.562842  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:44.687893  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:44.728348  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:44.728756  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:45.064666  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:45.189958  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:45.232713  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:45.233222  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:45.562565  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:45.687729  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:45.728652  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:45.728828  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:46.062829  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:46.187133  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:46.227957  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:46.228583  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:46.562410  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:46.687602  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:46.727915  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:46.728345  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:47.062156  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:47.186843  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:47.226563  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:47.227434  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:47.562511  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:47.687267  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:47.725543  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:47.726242  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:48.062228  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:48.187485  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:48.227470  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:48.227859  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:48.562620  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:48.687421  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:48.727207  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:48.727510  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:49.066594  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:49.186748  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:49.226331  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:49.226512  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:49.562185  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:49.686716  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:49.726308  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:49.726661  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:50.062650  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:50.186477  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:50.225439  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:50.227030  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:50.562458  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:50.690472  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:50.725724  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:50.726954  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:51.064161  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:51.188247  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:51.227598  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:51.229215  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:51.561529  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:51.687299  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:51.726249  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:51.726396  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:52.062447  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:52.186636  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:52.227279  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:52.228716  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:52.562053  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:52.687745  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:52.728468  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:52.729474  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:53.062572  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:53.187945  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:53.227110  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:53.227773  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:53.563038  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:53.686884  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:53.726752  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1121 23:49:53.726929  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:54.062680  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:54.187182  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:54.227599  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:54.228016  517697 kapi.go:107] duration metric: took 1m7.005739653s to wait for kubernetes.io/minikube-addons=registry ...
	I1121 23:49:54.562356  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:54.688624  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:54.726988  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:55.062844  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:55.186873  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:55.227003  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:55.562969  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:55.686995  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:55.727451  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:56.062214  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:56.187023  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:56.227016  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:56.562660  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:56.689743  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:56.726846  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:57.074853  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:57.187826  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:57.227365  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:57.562076  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:57.687001  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:57.727166  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:58.061930  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:58.187333  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:58.226876  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:58.562587  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:58.686498  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:58.727089  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:59.062674  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:59.187012  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:59.226358  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:49:59.562139  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:49:59.687429  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:49:59.726781  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:00.106593  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:00.215454  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:00.240663  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:00.563131  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:00.687514  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:00.727276  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:01.061705  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:01.187688  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:01.227437  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:01.563064  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:01.687137  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:01.726952  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:02.065222  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:02.188113  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:02.228380  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:02.562625  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:02.686556  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:02.726543  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:03.062657  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:03.187096  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:03.287602  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:03.562081  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:03.687899  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:03.727116  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:04.063303  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:04.187766  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:04.227386  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:04.562216  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:04.689316  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:04.726665  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:05.062705  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:05.187631  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:05.226985  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:05.563175  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:05.687215  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:05.726557  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:06.062824  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:06.187263  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:06.226588  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:06.562928  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:06.686910  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:06.727761  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:07.071286  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:07.188061  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:07.226053  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:07.565143  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:07.687205  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:07.726865  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:08.062595  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:08.188417  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:08.227203  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:08.566899  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:08.686595  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:08.729651  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:09.062177  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:09.187632  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:09.227024  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:09.562312  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:09.688679  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:09.792517  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:10.063121  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:10.187037  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:10.227775  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:10.562414  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:10.690316  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:10.728869  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:11.062711  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:11.186982  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:11.227175  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:11.563447  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:11.687691  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:11.727131  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:12.063221  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:12.187423  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:12.226480  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:12.561561  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:12.687692  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:12.726512  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:13.062627  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:13.187882  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:13.227581  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:13.571068  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:13.686834  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:13.726643  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:14.062610  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1121 23:50:14.188398  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:14.226713  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:14.562128  517697 kapi.go:107] duration metric: took 1m27.003730955s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1121 23:50:14.686868  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:14.727257  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:15.187581  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:15.226687  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:15.686956  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:15.727146  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:16.188092  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:16.227262  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:16.686951  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:16.727123  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:17.187477  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:17.226519  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:17.687614  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:17.726347  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:18.187147  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:18.226262  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:18.687743  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:18.726912  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:19.187344  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:19.288198  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:19.687996  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:19.789297  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:20.187301  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:20.226458  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:20.686817  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:20.727027  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:21.191351  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:21.226528  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:21.686658  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:21.726738  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:22.193334  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:22.235000  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:22.687252  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:22.726107  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:23.187541  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:23.226559  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:23.686817  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:23.727177  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:24.197364  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:24.226484  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:24.686992  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:24.727468  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:25.187335  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:25.226153  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:25.687975  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:25.727087  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:26.187790  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:26.226901  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:26.687629  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:26.727002  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:27.188138  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:27.228646  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:27.687097  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:27.726079  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:28.187963  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:28.226402  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:28.687955  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:28.727008  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:29.188310  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:29.227458  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:29.688709  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:29.728614  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:30.191373  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:30.226549  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:30.690001  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:30.727161  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:31.195490  517697 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1121 23:50:31.231331  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:31.687865  517697 kapi.go:107] duration metric: took 1m41.004128252s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1121 23:50:31.691505  517697 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-882841 cluster.
	I1121 23:50:31.694749  517697 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1121 23:50:31.698143  517697 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1121 23:50:31.727783  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:32.227784  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:32.726638  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:33.227306  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:33.726413  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:34.229660  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:34.727146  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:35.227335  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:35.727107  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:36.233629  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:36.727762  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:37.226669  517697 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1121 23:50:37.726800  517697 kapi.go:107] duration metric: took 1m50.503726384s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1121 23:50:37.730115  517697 out.go:179] * Enabled addons: default-storageclass, inspektor-gadget, registry-creds, storage-provisioner, amd-gpu-device-plugin, nvidia-device-plugin, cloud-spanner, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, gcp-auth, ingress
	I1121 23:50:37.733327  517697 addons.go:530] duration metric: took 1m56.932312921s for enable addons: enabled=[default-storageclass inspektor-gadget registry-creds storage-provisioner amd-gpu-device-plugin nvidia-device-plugin cloud-spanner ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver gcp-auth ingress]
	I1121 23:50:37.733373  517697 start.go:247] waiting for cluster config update ...
	I1121 23:50:37.733399  517697 start.go:256] writing updated cluster config ...
	I1121 23:50:37.733687  517697 ssh_runner.go:195] Run: rm -f paused
	I1121 23:50:37.738474  517697 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 23:50:37.741769  517697 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zjrtb" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:50:37.746938  517697 pod_ready.go:94] pod "coredns-66bc5c9577-zjrtb" is "Ready"
	I1121 23:50:37.747009  517697 pod_ready.go:86] duration metric: took 5.133432ms for pod "coredns-66bc5c9577-zjrtb" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:50:37.749591  517697 pod_ready.go:83] waiting for pod "etcd-addons-882841" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:50:37.754400  517697 pod_ready.go:94] pod "etcd-addons-882841" is "Ready"
	I1121 23:50:37.754430  517697 pod_ready.go:86] duration metric: took 4.811889ms for pod "etcd-addons-882841" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:50:37.756715  517697 pod_ready.go:83] waiting for pod "kube-apiserver-addons-882841" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:50:37.761504  517697 pod_ready.go:94] pod "kube-apiserver-addons-882841" is "Ready"
	I1121 23:50:37.761530  517697 pod_ready.go:86] duration metric: took 4.750525ms for pod "kube-apiserver-addons-882841" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:50:37.764141  517697 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-882841" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:50:38.142465  517697 pod_ready.go:94] pod "kube-controller-manager-addons-882841" is "Ready"
	I1121 23:50:38.142497  517697 pod_ready.go:86] duration metric: took 378.334868ms for pod "kube-controller-manager-addons-882841" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:50:38.342625  517697 pod_ready.go:83] waiting for pod "kube-proxy-gthqw" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:50:38.742880  517697 pod_ready.go:94] pod "kube-proxy-gthqw" is "Ready"
	I1121 23:50:38.742908  517697 pod_ready.go:86] duration metric: took 400.251724ms for pod "kube-proxy-gthqw" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:50:38.942322  517697 pod_ready.go:83] waiting for pod "kube-scheduler-addons-882841" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:50:39.342604  517697 pod_ready.go:94] pod "kube-scheduler-addons-882841" is "Ready"
	I1121 23:50:39.342635  517697 pod_ready.go:86] duration metric: took 400.288014ms for pod "kube-scheduler-addons-882841" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:50:39.342649  517697 pod_ready.go:40] duration metric: took 1.604140769s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 23:50:39.404354  517697 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1121 23:50:39.407995  517697 out.go:179] * Done! kubectl is now configured to use "addons-882841" cluster and "default" namespace by default
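
For reference on the gcp-auth messages in the log above (the lines about mounting GCP credentials and the `gcp-auth-skip-secret` label): below is a minimal sketch of a pod manifest that opts out of credential mounting. The pod name, container image, and the label value "true" are illustrative assumptions and are not taken from this test run; only the label key comes from the addon output.

  # Sketch: create a pod carrying the gcp-auth-skip-secret label so the
  # gcp-auth addon does not mount GCP credentials into it.
  # Name, image, and the assumed label value "true" are hypothetical.
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: demo-skip-gcp-auth          # hypothetical name, for illustration only
    labels:
      gcp-auth-skip-secret: "true"    # label key from the addon output; value assumed
  spec:
    containers:
    - name: busybox
      image: busybox:stable
      command: ["sleep", "3600"]
  EOF

As the addon output notes, pods that already exist keep their current mounts; they would need to be recreated (or the addon re-enabled with --refresh) for a change to take effect.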
	
	
	==> CRI-O <==
	Nov 21 23:51:08 addons-882841 crio[827]: time="2025-11-21T23:51:08.945905242Z" level=info msg="Started container" PID=5360 containerID=b276c20fe098d83369106be6f7aa39513377e25b08de820f293ca86949be7d7e description=default/test-local-path/busybox id=3b3586b5-227e-44d2-b825-0ca58a2365c2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0d5fc44bec856935bb72f2e3d0b96466ecb3d4c639eee5c81a5a5ef3965d3fb2
	Nov 21 23:51:10 addons-882841 crio[827]: time="2025-11-21T23:51:10.400361208Z" level=info msg="Stopping pod sandbox: 0d5fc44bec856935bb72f2e3d0b96466ecb3d4c639eee5c81a5a5ef3965d3fb2" id=bb70fd0d-c4d2-4909-bbf3-7578ad3de65f name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 21 23:51:10 addons-882841 crio[827]: time="2025-11-21T23:51:10.400661827Z" level=info msg="Got pod network &{Name:test-local-path Namespace:default ID:0d5fc44bec856935bb72f2e3d0b96466ecb3d4c639eee5c81a5a5ef3965d3fb2 UID:4b6a8df7-0e37-45e6-9597-b37a52bd040e NetNS:/var/run/netns/3bedea61-9510-44df-b963-46da31a07c6e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d388}] Aliases:map[]}"
	Nov 21 23:51:10 addons-882841 crio[827]: time="2025-11-21T23:51:10.40080373Z" level=info msg="Deleting pod default_test-local-path from CNI network \"kindnet\" (type=ptp)"
	Nov 21 23:51:10 addons-882841 crio[827]: time="2025-11-21T23:51:10.426869734Z" level=info msg="Stopped pod sandbox: 0d5fc44bec856935bb72f2e3d0b96466ecb3d4c639eee5c81a5a5ef3965d3fb2" id=bb70fd0d-c4d2-4909-bbf3-7578ad3de65f name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 21 23:51:12 addons-882841 crio[827]: time="2025-11-21T23:51:11.993391926Z" level=info msg="Running pod sandbox: local-path-storage/helper-pod-delete-pvc-f4b21e23-9cc1-4889-8265-d98a837f9eda/POD" id=b5b0fbdd-f79b-4ed3-9957-943a71abe655 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 21 23:51:12 addons-882841 crio[827]: time="2025-11-21T23:51:11.993474754Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 23:51:12 addons-882841 crio[827]: time="2025-11-21T23:51:12.002181547Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-f4b21e23-9cc1-4889-8265-d98a837f9eda Namespace:local-path-storage ID:75557f58076f860e4ec19aa3cd5a6afff12967bb1311b30d878d9d88686329f5 UID:f56231bc-01c4-4571-9836-63b5c15be7e1 NetNS:/var/run/netns/b5b5d765-89f5-4a16-985f-d5821e1b4a2c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4002404670}] Aliases:map[]}"
	Nov 21 23:51:12 addons-882841 crio[827]: time="2025-11-21T23:51:12.002222473Z" level=info msg="Adding pod local-path-storage_helper-pod-delete-pvc-f4b21e23-9cc1-4889-8265-d98a837f9eda to CNI network \"kindnet\" (type=ptp)"
	Nov 21 23:51:12 addons-882841 crio[827]: time="2025-11-21T23:51:12.013013997Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-f4b21e23-9cc1-4889-8265-d98a837f9eda Namespace:local-path-storage ID:75557f58076f860e4ec19aa3cd5a6afff12967bb1311b30d878d9d88686329f5 UID:f56231bc-01c4-4571-9836-63b5c15be7e1 NetNS:/var/run/netns/b5b5d765-89f5-4a16-985f-d5821e1b4a2c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4002404670}] Aliases:map[]}"
	Nov 21 23:51:12 addons-882841 crio[827]: time="2025-11-21T23:51:12.013168544Z" level=info msg="Checking pod local-path-storage_helper-pod-delete-pvc-f4b21e23-9cc1-4889-8265-d98a837f9eda for CNI network kindnet (type=ptp)"
	Nov 21 23:51:12 addons-882841 crio[827]: time="2025-11-21T23:51:12.03079544Z" level=info msg="Ran pod sandbox 75557f58076f860e4ec19aa3cd5a6afff12967bb1311b30d878d9d88686329f5 with infra container: local-path-storage/helper-pod-delete-pvc-f4b21e23-9cc1-4889-8265-d98a837f9eda/POD" id=b5b0fbdd-f79b-4ed3-9957-943a71abe655 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 21 23:51:12 addons-882841 crio[827]: time="2025-11-21T23:51:12.032226898Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=06ca85be-2cc2-4393-90e6-9fb769d3dc5d name=/runtime.v1.ImageService/ImageStatus
	Nov 21 23:51:12 addons-882841 crio[827]: time="2025-11-21T23:51:12.037389556Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=432fb851-0a42-48a0-913c-4e868672de13 name=/runtime.v1.ImageService/ImageStatus
	Nov 21 23:51:12 addons-882841 crio[827]: time="2025-11-21T23:51:12.046556651Z" level=info msg="Creating container: local-path-storage/helper-pod-delete-pvc-f4b21e23-9cc1-4889-8265-d98a837f9eda/helper-pod" id=f5b050a7-fce0-47df-9368-8bd653b86ec4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 23:51:12 addons-882841 crio[827]: time="2025-11-21T23:51:12.046845225Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 23:51:12 addons-882841 crio[827]: time="2025-11-21T23:51:12.060390704Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 23:51:12 addons-882841 crio[827]: time="2025-11-21T23:51:12.06245087Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 21 23:51:12 addons-882841 crio[827]: time="2025-11-21T23:51:12.088957337Z" level=info msg="Created container 710150e860d715f2083775b0627a3e7d73883917345288b46fb3c70941aef15e: local-path-storage/helper-pod-delete-pvc-f4b21e23-9cc1-4889-8265-d98a837f9eda/helper-pod" id=f5b050a7-fce0-47df-9368-8bd653b86ec4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 21 23:51:12 addons-882841 crio[827]: time="2025-11-21T23:51:12.113332215Z" level=info msg="Starting container: 710150e860d715f2083775b0627a3e7d73883917345288b46fb3c70941aef15e" id=cd51068e-9c76-4863-8070-fee8e7a9afd2 name=/runtime.v1.RuntimeService/StartContainer
	Nov 21 23:51:12 addons-882841 crio[827]: time="2025-11-21T23:51:12.119463656Z" level=info msg="Started container" PID=5450 containerID=710150e860d715f2083775b0627a3e7d73883917345288b46fb3c70941aef15e description=local-path-storage/helper-pod-delete-pvc-f4b21e23-9cc1-4889-8265-d98a837f9eda/helper-pod id=cd51068e-9c76-4863-8070-fee8e7a9afd2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=75557f58076f860e4ec19aa3cd5a6afff12967bb1311b30d878d9d88686329f5
	Nov 21 23:51:13 addons-882841 crio[827]: time="2025-11-21T23:51:13.415515803Z" level=info msg="Stopping pod sandbox: 75557f58076f860e4ec19aa3cd5a6afff12967bb1311b30d878d9d88686329f5" id=c3f7999b-ada0-4e37-b4cf-1ce0e3b3031d name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 21 23:51:13 addons-882841 crio[827]: time="2025-11-21T23:51:13.415871576Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-f4b21e23-9cc1-4889-8265-d98a837f9eda Namespace:local-path-storage ID:75557f58076f860e4ec19aa3cd5a6afff12967bb1311b30d878d9d88686329f5 UID:f56231bc-01c4-4571-9836-63b5c15be7e1 NetNS:/var/run/netns/b5b5d765-89f5-4a16-985f-d5821e1b4a2c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012de98}] Aliases:map[]}"
	Nov 21 23:51:13 addons-882841 crio[827]: time="2025-11-21T23:51:13.416028675Z" level=info msg="Deleting pod local-path-storage_helper-pod-delete-pvc-f4b21e23-9cc1-4889-8265-d98a837f9eda from CNI network \"kindnet\" (type=ptp)"
	Nov 21 23:51:13 addons-882841 crio[827]: time="2025-11-21T23:51:13.449874573Z" level=info msg="Stopped pod sandbox: 75557f58076f860e4ec19aa3cd5a6afff12967bb1311b30d878d9d88686329f5" id=c3f7999b-ada0-4e37-b4cf-1ce0e3b3031d name=/runtime.v1.RuntimeService/StopPodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                                          NAMESPACE
	710150e860d71       fc9db2894f4e4b8c296b8c9dab7e18a6e78de700d21bc0cfaf5c78484226db9c                                                                             1 second ago         Exited              helper-pod                               0                   75557f58076f8       helper-pod-delete-pvc-f4b21e23-9cc1-4889-8265-d98a837f9eda   local-path-storage
	b276c20fe098d       docker.io/library/busybox@sha256:079b4a73854a059a2073c6e1a031b17fcbf23a47c6c59ae760d78045199e403c                                            4 seconds ago        Exited              busybox                                  0                   0d5fc44bec856       test-local-path                                              default
	63630ecafe0b8       docker.io/library/busybox@sha256:1fa89c01cd0473cedbd1a470abb8c139eeb80920edf1bc55de87851bfb63ea11                                            8 seconds ago        Exited              helper-pod                               0                   e8a91bd1bea08       helper-pod-create-pvc-f4b21e23-9cc1-4889-8265-d98a837f9eda   local-path-storage
	2ea6072bf2123       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          31 seconds ago       Running             busybox                                  0                   d3669523d5fc5       busybox                                                      default
	bbd9358920805       registry.k8s.io/ingress-nginx/controller@sha256:655333e68deab34ee3701f400c4d5d9709000cdfdadb802e4bd7500b027e1259                             37 seconds ago       Running             controller                               0                   0559b46fe9974       ingress-nginx-controller-6c8bf45fb-tnj6x                     ingress-nginx
	ea59934e5547e       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 43 seconds ago       Running             gcp-auth                                 0                   aac08c4fcde57       gcp-auth-78565c9fb4-rpmr7                                    gcp-auth
	2cf9276585e52       32daba64b064c571f27dbd4e285969f47f8e5dd6c692279b48622e941b4d137f                                                                             54 seconds ago       Exited              patch                                    3                   d516257b4774e       gcp-auth-certs-patch-g6wlv                                   gcp-auth
	d4288bfcc52ca       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          About a minute ago   Running             csi-snapshotter                          0                   6fc2758cdfb87       csi-hostpathplugin-mkngh                                     kube-system
	c2b953c3eb94b       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          About a minute ago   Running             csi-provisioner                          0                   6fc2758cdfb87       csi-hostpathplugin-mkngh                                     kube-system
	4b84c2719358c       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            About a minute ago   Running             liveness-probe                           0                   6fc2758cdfb87       csi-hostpathplugin-mkngh                                     kube-system
	03e07c8bd5633       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           About a minute ago   Running             hostpath                                 0                   6fc2758cdfb87       csi-hostpathplugin-mkngh                                     kube-system
	4f400d29139bb       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                About a minute ago   Running             node-driver-registrar                    0                   6fc2758cdfb87       csi-hostpathplugin-mkngh                                     kube-system
	d0aa8b937f2aa       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c2c5268a38de5c792beb84122c5350c644fbb9b85e04342ef72fa9a6d052f0b0                            About a minute ago   Running             gadget                                   0                   18d998949e253       gadget-84krr                                                 gadget
	0ecc1bcf50504       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   About a minute ago   Running             csi-external-health-monitor-controller   0                   6fc2758cdfb87       csi-hostpathplugin-mkngh                                     kube-system
	0d6392a88c56a       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     About a minute ago   Running             nvidia-device-plugin-ctr                 0                   6fdfcb727159f       nvidia-device-plugin-daemonset-4jvp9                         kube-system
	8eba7b83af467       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   About a minute ago   Exited              patch                                    0                   77fe03c4dbab0       ingress-nginx-admission-patch-lfbrr                          ingress-nginx
	492d7c5835fb8       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago   Running             csi-attacher                             0                   c91237a67e54a       csi-hostpath-attacher-0                                      kube-system
	438ba026464ea       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   About a minute ago   Exited              create                                   0                   1c3a1894461df       ingress-nginx-admission-create-f9tdh                         ingress-nginx
	96794062627c7       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              About a minute ago   Running             csi-resizer                              0                   30c9f2ed0a3ba       csi-hostpath-resizer-0                                       kube-system
	d6df5ea7f4eb5       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              About a minute ago   Running             registry-proxy                           0                   68c26a2e4f7da       registry-proxy-rrtfc                                         kube-system
	800239fcbfd60       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   6f45d55fc6313       snapshot-controller-7d9fbc56b8-44w6b                         kube-system
	1d9bfd16346a7       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   9f2b3bc75d335       snapshot-controller-7d9fbc56b8-q99bt                         kube-system
	cd8dbe13ad5f4       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              About a minute ago   Running             yakd                                     0                   08e33db47f3a7       yakd-dashboard-5ff678cb9-x6sbv                               yakd-dashboard
	b7eb954adbbab       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   72fe85b281535       metrics-server-85b7d694d7-7tk8r                              kube-system
	ba42295e49f9a       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           About a minute ago   Running             registry                                 0                   27f67d1dd7bf1       registry-6b586f9694-5jvr4                                    kube-system
	2d039a17459cf       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               About a minute ago   Running             cloud-spanner-emulator                   0                   1d3f95b027770       cloud-spanner-emulator-6f9fcf858b-z8s5r                      default
	36f901d726865       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               About a minute ago   Running             minikube-ingress-dns                     0                   bf601ff2e8c9e       kube-ingress-dns-minikube                                    kube-system
	e7957c170631a       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             About a minute ago   Running             local-path-provisioner                   0                   444ccf0807ab2       local-path-provisioner-648f6765c9-sv9ds                      local-path-storage
	561d110537c5c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   bffa57760e853       storage-provisioner                                          kube-system
	e6dc31e093068       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   849fa7ee214a9       coredns-66bc5c9577-zjrtb                                     kube-system
	074654b9d6b9f       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             2 minutes ago        Running             kube-proxy                               0                   8debede13cde9       kube-proxy-gthqw                                             kube-system
	970af788676bd       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             2 minutes ago        Running             kindnet-cni                              0                   cdbadd866640c       kindnet-wghw5                                                kube-system
	f6c2269669bcf       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             2 minutes ago        Running             kube-controller-manager                  0                   29706823db1fd       kube-controller-manager-addons-882841                        kube-system
	415d7ebb38dbf       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             2 minutes ago        Running             kube-apiserver                           0                   81f16edd0c739       kube-apiserver-addons-882841                                 kube-system
	ba805611fb053       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             2 minutes ago        Running             kube-scheduler                           0                   2bf7c2eeb1e7e       kube-scheduler-addons-882841                                 kube-system
	0b539dfc17788       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             2 minutes ago        Running             etcd                                     0                   31bf5833591bf       etcd-addons-882841                                           kube-system
	
	
	==> coredns [e6dc31e0930681fac6fbd625f4ec7a07e57c10d13a728a7ec163a4c66a6d4a2b] <==
	[INFO] 10.244.0.12:32983 - 42028 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.003126023s
	[INFO] 10.244.0.12:32983 - 31053 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.001051014s
	[INFO] 10.244.0.12:32983 - 29123 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.00043865s
	[INFO] 10.244.0.12:36008 - 5721 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000157812s
	[INFO] 10.244.0.12:36008 - 5508 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000067887s
	[INFO] 10.244.0.12:34000 - 476 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000095013s
	[INFO] 10.244.0.12:34000 - 31 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000062439s
	[INFO] 10.244.0.12:50238 - 47165 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000080071s
	[INFO] 10.244.0.12:50238 - 46968 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000059994s
	[INFO] 10.244.0.12:46915 - 17370 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002665852s
	[INFO] 10.244.0.12:46915 - 16909 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002678413s
	[INFO] 10.244.0.12:39562 - 34668 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000096055s
	[INFO] 10.244.0.12:39562 - 34512 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000066066s
	[INFO] 10.244.0.21:39683 - 37096 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000273296s
	[INFO] 10.244.0.21:44539 - 15643 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000194464s
	[INFO] 10.244.0.21:33461 - 646 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000132944s
	[INFO] 10.244.0.21:53688 - 51532 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000186579s
	[INFO] 10.244.0.21:44031 - 64469 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000126322s
	[INFO] 10.244.0.21:44494 - 39960 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000077356s
	[INFO] 10.244.0.21:49021 - 36189 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001748667s
	[INFO] 10.244.0.21:35656 - 579 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.005246297s
	[INFO] 10.244.0.21:43206 - 41865 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001533741s
	[INFO] 10.244.0.21:49219 - 29542 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.0019691s
	[INFO] 10.244.0.23:52402 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000162802s
	[INFO] 10.244.0.23:46277 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000105893s
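	The CoreDNS queries above show a pod resolver expanding a short Service name through its search domains (the namespace, svc.cluster.local, cluster.local, then the cloud suffix), collecting NXDOMAIN answers until the fully qualified cluster name resolves. A minimal sketch of that lookup, assuming it runs inside a pod whose /etc/resolv.conf carries the usual Kubernetes search list and ndots setting:

```go
// Illustrative sketch: resolve the registry Service name from inside a pod.
// The short form is expanded via the pod's DNS search path, which is what
// produces the NXDOMAIN fan-out seen in the CoreDNS log; the trailing-dot
// form is queried as-is.
package main

import (
	"fmt"
	"net"
)

func main() {
	names := []string{
		"registry.kube-system",                    // expanded through the search domains
		"registry.kube-system.svc.cluster.local.", // fully qualified, single query
	}
	for _, name := range names {
		addrs, err := net.LookupHost(name)
		fmt.Printf("%-45s -> %v (err=%v)\n", name, addrs, err)
	}
}
```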
	
	
	==> describe nodes <==
	Name:               addons-882841
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-882841
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=addons-882841
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T23_48_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-882841
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-882841"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 23:48:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-882841
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 21 Nov 2025 23:51:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 21 Nov 2025 23:51:10 +0000   Fri, 21 Nov 2025 23:48:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 21 Nov 2025 23:51:10 +0000   Fri, 21 Nov 2025 23:48:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 21 Nov 2025 23:51:10 +0000   Fri, 21 Nov 2025 23:48:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 21 Nov 2025 23:51:10 +0000   Fri, 21 Nov 2025 23:49:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-882841
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c92eea4d3c03c0156e01fcf8691e3907
	  System UUID:                5694beba-5776-4cd9-a5e8-6657562a60ef
	  Boot ID:                    72ac7385-472f-47d1-a23e-bd80468d6e09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  default                     cloud-spanner-emulator-6f9fcf858b-z8s5r     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  gadget                      gadget-84krr                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  gcp-auth                    gcp-auth-78565c9fb4-rpmr7                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-tnj6x    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         2m27s
	  kube-system                 coredns-66bc5c9577-zjrtb                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m33s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 csi-hostpathplugin-mkngh                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 etcd-addons-882841                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m38s
	  kube-system                 kindnet-wghw5                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m33s
	  kube-system                 kube-apiserver-addons-882841                250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kube-controller-manager-addons-882841       200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 kube-proxy-gthqw                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 kube-scheduler-addons-882841                100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 metrics-server-85b7d694d7-7tk8r             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         2m29s
	  kube-system                 nvidia-device-plugin-daemonset-4jvp9        0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 registry-6b586f9694-5jvr4                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 registry-creds-764b6fb674-8wv2f             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 registry-proxy-rrtfc                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 snapshot-controller-7d9fbc56b8-44w6b        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 snapshot-controller-7d9fbc56b8-q99bt        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  local-path-storage          local-path-provisioner-648f6765c9-sv9ds     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-x6sbv              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m31s                  kube-proxy       
	  Normal   Starting                 2m45s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m45s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m45s (x8 over 2m45s)  kubelet          Node addons-882841 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m45s (x8 over 2m45s)  kubelet          Node addons-882841 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m45s (x8 over 2m45s)  kubelet          Node addons-882841 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m38s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m38s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m38s                  kubelet          Node addons-882841 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m38s                  kubelet          Node addons-882841 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m38s                  kubelet          Node addons-882841 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m34s                  node-controller  Node addons-882841 event: Registered Node addons-882841 in Controller
	  Normal   NodeReady                112s                   kubelet          Node addons-882841 status is now: NodeReady
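	For cross-checking the percentages in the Allocated resources table above: the node allocates against 2 CPUs (2000m) and 8022300Ki (about 7834Mi) of memory, so the summed CPU requests of 1050m work out to 1050/2000 ≈ 52% and the 638Mi of memory requests to 638/7834 ≈ 8%, matching the figures kubectl reports.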
	
	
	==> dmesg <==
	[ +23.891724] overlayfs: idmapped layers are currently not supported
	[Nov21 23:06] overlayfs: idmapped layers are currently not supported
	[ +32.573452] overlayfs: idmapped layers are currently not supported
	[  +9.452963] overlayfs: idmapped layers are currently not supported
	[Nov21 23:08] overlayfs: idmapped layers are currently not supported
	[ +24.877472] overlayfs: idmapped layers are currently not supported
	[Nov21 23:11] overlayfs: idmapped layers are currently not supported
	[Nov21 23:13] overlayfs: idmapped layers are currently not supported
	[Nov21 23:14] overlayfs: idmapped layers are currently not supported
	[Nov21 23:15] overlayfs: idmapped layers are currently not supported
	[Nov21 23:16] overlayfs: idmapped layers are currently not supported
	[Nov21 23:17] overlayfs: idmapped layers are currently not supported
	[ +10.681159] overlayfs: idmapped layers are currently not supported
	[Nov21 23:19] overlayfs: idmapped layers are currently not supported
	[ +15.192296] overlayfs: idmapped layers are currently not supported
	[Nov21 23:20] overlayfs: idmapped layers are currently not supported
	[Nov21 23:21] overlayfs: idmapped layers are currently not supported
	[Nov21 23:22] overlayfs: idmapped layers are currently not supported
	[ +12.884842] overlayfs: idmapped layers are currently not supported
	[Nov21 23:23] overlayfs: idmapped layers are currently not supported
	[ +12.022080] overlayfs: idmapped layers are currently not supported
	[Nov21 23:25] overlayfs: idmapped layers are currently not supported
	[ +24.447615] overlayfs: idmapped layers are currently not supported
	[Nov21 23:46] kauditd_printk_skb: 8 callbacks suppressed
	[Nov21 23:48] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [0b539dfc17788b7400ee2eb5abadb008e5a8a9796bb112d8a69f14a34f2fd551] <==
	{"level":"warn","ts":"2025-11-21T23:48:32.341440Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:48:32.361337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:48:32.372921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:48:32.393724Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:48:32.416888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:48:32.446862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:48:32.470324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:48:32.483436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:48:32.507178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:48:32.516352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:48:32.535319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:48:32.550711Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:48:32.567380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:48:32.587652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:48:32.622209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:48:32.637397Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:48:32.663638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:48:32.691732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:48:32.859489Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:48:47.768274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:48:47.782599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:49:10.728841Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:49:10.744091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:49:10.774047Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:49:10.788751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54904","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [ea59934e5547eaa1836042ec90c5594d31bad9d818cf377cf1a0fa06d816c2e9] <==
	2025/11/21 23:50:30 GCP Auth Webhook started!
	2025/11/21 23:50:39 Ready to marshal response ...
	2025/11/21 23:50:39 Ready to write response ...
	2025/11/21 23:50:40 Ready to marshal response ...
	2025/11/21 23:50:40 Ready to write response ...
	2025/11/21 23:50:40 Ready to marshal response ...
	2025/11/21 23:50:40 Ready to write response ...
	2025/11/21 23:51:02 Ready to marshal response ...
	2025/11/21 23:51:02 Ready to write response ...
	2025/11/21 23:51:02 Ready to marshal response ...
	2025/11/21 23:51:02 Ready to write response ...
	2025/11/21 23:51:02 Ready to marshal response ...
	2025/11/21 23:51:02 Ready to write response ...
	2025/11/21 23:51:11 Ready to marshal response ...
	2025/11/21 23:51:11 Ready to write response ...
	
	
	==> kernel <==
	 23:51:14 up  4:33,  0 user,  load average: 1.80, 1.39, 1.25
	Linux addons-882841 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [970af788676bddee24edc4dbf7882805510ac451d6658537c9d2152752c3ffee] <==
	I1121 23:49:13.915286       1 controller.go:711] "Syncing nftables rules"
	I1121 23:49:22.418030       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 23:49:22.418084       1 main.go:301] handling current node
	I1121 23:49:32.417882       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 23:49:32.417931       1 main.go:301] handling current node
	I1121 23:49:42.415043       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 23:49:42.415073       1 main.go:301] handling current node
	I1121 23:49:52.415204       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 23:49:52.415249       1 main.go:301] handling current node
	I1121 23:50:02.414063       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 23:50:02.414089       1 main.go:301] handling current node
	I1121 23:50:12.414973       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 23:50:12.415006       1 main.go:301] handling current node
	I1121 23:50:22.414433       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 23:50:22.414471       1 main.go:301] handling current node
	I1121 23:50:32.413942       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 23:50:32.413974       1 main.go:301] handling current node
	I1121 23:50:42.414683       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 23:50:42.414828       1 main.go:301] handling current node
	I1121 23:50:52.414488       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 23:50:52.414523       1 main.go:301] handling current node
	I1121 23:51:02.414168       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 23:51:02.414208       1 main.go:301] handling current node
	I1121 23:51:12.414265       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 23:51:12.414296       1 main.go:301] handling current node
	
	
	==> kube-apiserver [415d7ebb38dbfa2b139e79fcf924802ecf12d3ba74e075e30026fdd18353d343] <==
	W1121 23:49:45.962832       1 handler_proxy.go:99] no RequestInfo found in the context
	E1121 23:49:45.962930       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1121 23:49:45.966785       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.241.191:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.241.191:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.241.191:443: connect: connection refused" logger="UnhandledError"
	E1121 23:49:45.967748       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.241.191:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.241.191:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.241.191:443: connect: connection refused" logger="UnhandledError"
	W1121 23:49:46.733860       1 handler_proxy.go:99] no RequestInfo found in the context
	E1121 23:49:46.733903       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1121 23:49:46.733916       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1121 23:49:46.735096       1 handler_proxy.go:99] no RequestInfo found in the context
	E1121 23:49:46.735170       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1121 23:49:46.735180       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1121 23:49:50.985013       1 handler_proxy.go:99] no RequestInfo found in the context
	E1121 23:49:50.985066       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1121 23:49:50.985124       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.241.191:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.241.191:443/apis/metrics.k8s.io/v1beta1\": context deadline exceeded" logger="UnhandledError"
	I1121 23:49:51.041438       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1121 23:50:50.348423       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:37206: use of closed network connection
	E1121 23:50:50.757015       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:37240: use of closed network connection
	
	
	==> kube-controller-manager [f6c2269669bcf9942b43c38a6d80d882a37c12fba06a2b1b514b07dbd6183350] <==
	I1121 23:48:40.747766       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1121 23:48:40.749782       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-882841"
	I1121 23:48:40.749917       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1121 23:48:40.749659       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1121 23:48:40.749667       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1121 23:48:40.749682       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1121 23:48:40.749692       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1121 23:48:40.749593       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1121 23:48:40.749649       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1121 23:48:40.751003       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1121 23:48:40.751058       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 23:48:40.758724       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 23:48:40.759985       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1121 23:48:40.765342       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	E1121 23:48:45.854083       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1121 23:49:10.721924       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1121 23:49:10.722169       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1121 23:49:10.722258       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1121 23:49:10.762605       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1121 23:49:10.766724       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1121 23:49:10.822776       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 23:49:10.867906       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 23:49:25.759178       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1121 23:49:40.827929       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1121 23:49:40.892338       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [074654b9d6b9f820e2f61d8ef839ef5ebec8673802a3e034c02530f243f023d0] <==
	I1121 23:48:42.215012       1 server_linux.go:53] "Using iptables proxy"
	I1121 23:48:42.323653       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 23:48:42.437409       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 23:48:42.437439       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1121 23:48:42.437524       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 23:48:42.479624       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 23:48:42.479692       1 server_linux.go:132] "Using iptables Proxier"
	I1121 23:48:42.488132       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 23:48:42.488463       1 server.go:527] "Version info" version="v1.34.1"
	I1121 23:48:42.488479       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 23:48:42.489680       1 config.go:200] "Starting service config controller"
	I1121 23:48:42.489689       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 23:48:42.489705       1 config.go:106] "Starting endpoint slice config controller"
	I1121 23:48:42.489709       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 23:48:42.489720       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 23:48:42.489724       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 23:48:42.491014       1 config.go:309] "Starting node config controller"
	I1121 23:48:42.495868       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 23:48:42.495890       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 23:48:42.590111       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1121 23:48:42.590178       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1121 23:48:42.590395       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [ba805611fb053ae520be5762cfc188a2dd5915f488bed9770376cb5e14b60936] <==
	I1121 23:48:34.335668       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 23:48:34.335757       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 23:48:34.336207       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1121 23:48:34.336252       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1121 23:48:34.346574       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1121 23:48:34.347178       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1121 23:48:34.347238       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1121 23:48:34.347371       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1121 23:48:34.347446       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1121 23:48:34.347479       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1121 23:48:34.347515       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1121 23:48:34.349293       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1121 23:48:34.349413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1121 23:48:34.349493       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1121 23:48:34.351330       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1121 23:48:34.351446       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1121 23:48:34.351554       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1121 23:48:34.351786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1121 23:48:34.351973       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1121 23:48:34.352013       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1121 23:48:34.352049       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1121 23:48:34.352084       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1121 23:48:34.352133       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1121 23:48:35.309012       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1121 23:48:37.836669       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 21 23:51:10 addons-882841 kubelet[1277]: I1121 23:51:10.556724    1277 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b6a8df7-0e37-45e6-9597-b37a52bd040e-pvc-f4b21e23-9cc1-4889-8265-d98a837f9eda" (OuterVolumeSpecName: "data") pod "4b6a8df7-0e37-45e6-9597-b37a52bd040e" (UID: "4b6a8df7-0e37-45e6-9597-b37a52bd040e"). InnerVolumeSpecName "pvc-f4b21e23-9cc1-4889-8265-d98a837f9eda". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 21 23:51:10 addons-882841 kubelet[1277]: I1121 23:51:10.556751    1277 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b6a8df7-0e37-45e6-9597-b37a52bd040e-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "4b6a8df7-0e37-45e6-9597-b37a52bd040e" (UID: "4b6a8df7-0e37-45e6-9597-b37a52bd040e"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 21 23:51:10 addons-882841 kubelet[1277]: I1121 23:51:10.558718    1277 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b6a8df7-0e37-45e6-9597-b37a52bd040e-kube-api-access-srfn2" (OuterVolumeSpecName: "kube-api-access-srfn2") pod "4b6a8df7-0e37-45e6-9597-b37a52bd040e" (UID: "4b6a8df7-0e37-45e6-9597-b37a52bd040e"). InnerVolumeSpecName "kube-api-access-srfn2". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 21 23:51:10 addons-882841 kubelet[1277]: I1121 23:51:10.657271    1277 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-srfn2\" (UniqueName: \"kubernetes.io/projected/4b6a8df7-0e37-45e6-9597-b37a52bd040e-kube-api-access-srfn2\") on node \"addons-882841\" DevicePath \"\""
	Nov 21 23:51:10 addons-882841 kubelet[1277]: I1121 23:51:10.657515    1277 reconciler_common.go:299] "Volume detached for volume \"pvc-f4b21e23-9cc1-4889-8265-d98a837f9eda\" (UniqueName: \"kubernetes.io/host-path/4b6a8df7-0e37-45e6-9597-b37a52bd040e-pvc-f4b21e23-9cc1-4889-8265-d98a837f9eda\") on node \"addons-882841\" DevicePath \"\""
	Nov 21 23:51:10 addons-882841 kubelet[1277]: I1121 23:51:10.657598    1277 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/4b6a8df7-0e37-45e6-9597-b37a52bd040e-gcp-creds\") on node \"addons-882841\" DevicePath \"\""
	Nov 21 23:51:11 addons-882841 kubelet[1277]: I1121 23:51:11.404859    1277 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d5fc44bec856935bb72f2e3d0b96466ecb3d4c639eee5c81a5a5ef3965d3fb2"
	Nov 21 23:51:11 addons-882841 kubelet[1277]: I1121 23:51:11.767175    1277 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92jsk\" (UniqueName: \"kubernetes.io/projected/f56231bc-01c4-4571-9836-63b5c15be7e1-kube-api-access-92jsk\") pod \"helper-pod-delete-pvc-f4b21e23-9cc1-4889-8265-d98a837f9eda\" (UID: \"f56231bc-01c4-4571-9836-63b5c15be7e1\") " pod="local-path-storage/helper-pod-delete-pvc-f4b21e23-9cc1-4889-8265-d98a837f9eda"
	Nov 21 23:51:11 addons-882841 kubelet[1277]: I1121 23:51:11.767886    1277 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/f56231bc-01c4-4571-9836-63b5c15be7e1-script\") pod \"helper-pod-delete-pvc-f4b21e23-9cc1-4889-8265-d98a837f9eda\" (UID: \"f56231bc-01c4-4571-9836-63b5c15be7e1\") " pod="local-path-storage/helper-pod-delete-pvc-f4b21e23-9cc1-4889-8265-d98a837f9eda"
	Nov 21 23:51:11 addons-882841 kubelet[1277]: I1121 23:51:11.768040    1277 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/f56231bc-01c4-4571-9836-63b5c15be7e1-data\") pod \"helper-pod-delete-pvc-f4b21e23-9cc1-4889-8265-d98a837f9eda\" (UID: \"f56231bc-01c4-4571-9836-63b5c15be7e1\") " pod="local-path-storage/helper-pod-delete-pvc-f4b21e23-9cc1-4889-8265-d98a837f9eda"
	Nov 21 23:51:11 addons-882841 kubelet[1277]: I1121 23:51:11.768227    1277 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/f56231bc-01c4-4571-9836-63b5c15be7e1-gcp-creds\") pod \"helper-pod-delete-pvc-f4b21e23-9cc1-4889-8265-d98a837f9eda\" (UID: \"f56231bc-01c4-4571-9836-63b5c15be7e1\") " pod="local-path-storage/helper-pod-delete-pvc-f4b21e23-9cc1-4889-8265-d98a837f9eda"
	Nov 21 23:51:12 addons-882841 kubelet[1277]: W1121 23:51:12.026422    1277 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/cbf01a114cc53a8b6c72a0ed56d9776d5ffd3dfdacd5a45cb3e08babfb8e2033/crio-75557f58076f860e4ec19aa3cd5a6afff12967bb1311b30d878d9d88686329f5 WatchSource:0}: Error finding container 75557f58076f860e4ec19aa3cd5a6afff12967bb1311b30d878d9d88686329f5: Status 404 returned error can't find the container with id 75557f58076f860e4ec19aa3cd5a6afff12967bb1311b30d878d9d88686329f5
	Nov 21 23:51:12 addons-882841 kubelet[1277]: I1121 23:51:12.565957    1277 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b6a8df7-0e37-45e6-9597-b37a52bd040e" path="/var/lib/kubelet/pods/4b6a8df7-0e37-45e6-9597-b37a52bd040e/volumes"
	Nov 21 23:51:13 addons-882841 kubelet[1277]: I1121 23:51:13.582030    1277 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/f56231bc-01c4-4571-9836-63b5c15be7e1-data\") pod \"f56231bc-01c4-4571-9836-63b5c15be7e1\" (UID: \"f56231bc-01c4-4571-9836-63b5c15be7e1\") "
	Nov 21 23:51:13 addons-882841 kubelet[1277]: I1121 23:51:13.582102    1277 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92jsk\" (UniqueName: \"kubernetes.io/projected/f56231bc-01c4-4571-9836-63b5c15be7e1-kube-api-access-92jsk\") pod \"f56231bc-01c4-4571-9836-63b5c15be7e1\" (UID: \"f56231bc-01c4-4571-9836-63b5c15be7e1\") "
	Nov 21 23:51:13 addons-882841 kubelet[1277]: I1121 23:51:13.582150    1277 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/f56231bc-01c4-4571-9836-63b5c15be7e1-gcp-creds\") pod \"f56231bc-01c4-4571-9836-63b5c15be7e1\" (UID: \"f56231bc-01c4-4571-9836-63b5c15be7e1\") "
	Nov 21 23:51:13 addons-882841 kubelet[1277]: I1121 23:51:13.582177    1277 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/f56231bc-01c4-4571-9836-63b5c15be7e1-script\") pod \"f56231bc-01c4-4571-9836-63b5c15be7e1\" (UID: \"f56231bc-01c4-4571-9836-63b5c15be7e1\") "
	Nov 21 23:51:13 addons-882841 kubelet[1277]: I1121 23:51:13.582883    1277 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f56231bc-01c4-4571-9836-63b5c15be7e1-script" (OuterVolumeSpecName: "script") pod "f56231bc-01c4-4571-9836-63b5c15be7e1" (UID: "f56231bc-01c4-4571-9836-63b5c15be7e1"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Nov 21 23:51:13 addons-882841 kubelet[1277]: I1121 23:51:13.582939    1277 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f56231bc-01c4-4571-9836-63b5c15be7e1-data" (OuterVolumeSpecName: "data") pod "f56231bc-01c4-4571-9836-63b5c15be7e1" (UID: "f56231bc-01c4-4571-9836-63b5c15be7e1"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 21 23:51:13 addons-882841 kubelet[1277]: I1121 23:51:13.583059    1277 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f56231bc-01c4-4571-9836-63b5c15be7e1-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "f56231bc-01c4-4571-9836-63b5c15be7e1" (UID: "f56231bc-01c4-4571-9836-63b5c15be7e1"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 21 23:51:13 addons-882841 kubelet[1277]: I1121 23:51:13.590377    1277 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f56231bc-01c4-4571-9836-63b5c15be7e1-kube-api-access-92jsk" (OuterVolumeSpecName: "kube-api-access-92jsk") pod "f56231bc-01c4-4571-9836-63b5c15be7e1" (UID: "f56231bc-01c4-4571-9836-63b5c15be7e1"). InnerVolumeSpecName "kube-api-access-92jsk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 21 23:51:13 addons-882841 kubelet[1277]: I1121 23:51:13.683332    1277 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-92jsk\" (UniqueName: \"kubernetes.io/projected/f56231bc-01c4-4571-9836-63b5c15be7e1-kube-api-access-92jsk\") on node \"addons-882841\" DevicePath \"\""
	Nov 21 23:51:13 addons-882841 kubelet[1277]: I1121 23:51:13.683375    1277 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/f56231bc-01c4-4571-9836-63b5c15be7e1-gcp-creds\") on node \"addons-882841\" DevicePath \"\""
	Nov 21 23:51:13 addons-882841 kubelet[1277]: I1121 23:51:13.683386    1277 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/f56231bc-01c4-4571-9836-63b5c15be7e1-script\") on node \"addons-882841\" DevicePath \"\""
	Nov 21 23:51:13 addons-882841 kubelet[1277]: I1121 23:51:13.683396    1277 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/f56231bc-01c4-4571-9836-63b5c15be7e1-data\") on node \"addons-882841\" DevicePath \"\""
	
	
	==> storage-provisioner [561d110537c5cbfb43c832086d9c8216a7180387df20a1b9c68b29a4b682f207] <==
	W1121 23:50:50.066864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:50:52.069964       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:50:52.074893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:50:54.079358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:50:54.084181       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:50:56.087280       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:50:56.097682       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:50:58.101402       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:50:58.106194       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:51:00.139931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:51:00.167763       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:51:02.171744       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:51:02.183945       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:51:04.189210       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:51:04.196716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:51:06.199455       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:51:06.204470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:51:08.207808       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:51:08.215517       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:51:10.218472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:51:10.225660       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:51:12.228726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:51:12.233369       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:51:14.237506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:51:14.241972       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-882841 -n addons-882841
helpers_test.go:269: (dbg) Run:  kubectl --context addons-882841 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-f9tdh ingress-nginx-admission-patch-lfbrr registry-creds-764b6fb674-8wv2f
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-882841 describe pod ingress-nginx-admission-create-f9tdh ingress-nginx-admission-patch-lfbrr registry-creds-764b6fb674-8wv2f
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-882841 describe pod ingress-nginx-admission-create-f9tdh ingress-nginx-admission-patch-lfbrr registry-creds-764b6fb674-8wv2f: exit status 1 (89.45833ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-f9tdh" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-lfbrr" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-8wv2f" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-882841 describe pod ingress-nginx-admission-create-f9tdh ingress-nginx-admission-patch-lfbrr registry-creds-764b6fb674-8wv2f: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-882841 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-882841 addons disable headlamp --alsologtostderr -v=1: exit status 11 (252.712527ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1121 23:51:15.213759  525022 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:51:15.214692  525022 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:51:15.214707  525022 out.go:374] Setting ErrFile to fd 2...
	I1121 23:51:15.214713  525022 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:51:15.214958  525022 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1121 23:51:15.215239  525022 mustload.go:66] Loading cluster: addons-882841
	I1121 23:51:15.215599  525022 config.go:182] Loaded profile config "addons-882841": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:51:15.215617  525022 addons.go:622] checking whether the cluster is paused
	I1121 23:51:15.215796  525022 config.go:182] Loaded profile config "addons-882841": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:51:15.215814  525022 host.go:66] Checking if "addons-882841" exists ...
	I1121 23:51:15.216317  525022 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:51:15.233627  525022 ssh_runner.go:195] Run: systemctl --version
	I1121 23:51:15.233693  525022 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:51:15.250960  525022 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	I1121 23:51:15.352462  525022 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 23:51:15.352556  525022 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 23:51:15.381846  525022 cri.go:89] found id: "d4288bfcc52cab787e4c57cd6f6ce8b5e4eab8e0f753f7b4b8c0dfbb6d7fcacf"
	I1121 23:51:15.381870  525022 cri.go:89] found id: "c2b953c3eb94bb9506b88f8f9082db05bab7bbad3c3c92fb83a61fd4148bcd7c"
	I1121 23:51:15.381875  525022 cri.go:89] found id: "4b84c2719358c69a49238d48c9737dea6a7f98672fde68733fbdfd0c5f76c519"
	I1121 23:51:15.381879  525022 cri.go:89] found id: "03e07c8bd5633a56d64051d6a63832c1a6fc109d661151083091d50ee1a7dfb7"
	I1121 23:51:15.381883  525022 cri.go:89] found id: "4f400d29139bbf4aeaa41d46055af4710e38ae5d6a844d3ee8b87ce4de3a0f3a"
	I1121 23:51:15.381886  525022 cri.go:89] found id: "0ecc1bcf505044ab1e7b917a3904e9d8ff652e08129c43483e6ff6f465bc7f48"
	I1121 23:51:15.381890  525022 cri.go:89] found id: "0d6392a88c56afc228d16b7659bcfa96628c1585bdb4b03af537ff609bf9f34a"
	I1121 23:51:15.381894  525022 cri.go:89] found id: "492d7c5835fb8502b60633915d9d3f885aa7bc3696e4febec5b394bab0a6773b"
	I1121 23:51:15.381899  525022 cri.go:89] found id: "96794062627c79a139839f726287bb12566d789fa0f4d5b1994cd88518a2e2eb"
	I1121 23:51:15.381909  525022 cri.go:89] found id: "d6df5ea7f4eb5e6b4852fd9cf791dda6c35753e420985175cbcc2a80b368d82b"
	I1121 23:51:15.381913  525022 cri.go:89] found id: "800239fcbfd600dbcec2ac03099f8200b62cf3769357e03ab2d40f672490913e"
	I1121 23:51:15.381916  525022 cri.go:89] found id: "1d9bfd16346a7b544d9696f5ac0700133b9c44dfb05b597e6ece14fdf7c1ee4d"
	I1121 23:51:15.381920  525022 cri.go:89] found id: "b7eb954adbbab5deddd57f625c7dce81a5fbc6e9ee1d2cb260d88e1fbd1482da"
	I1121 23:51:15.381923  525022 cri.go:89] found id: "ba42295e49f9af2999894c8bde53ee31c193600b80cc921d12a7b280aefbca13"
	I1121 23:51:15.381926  525022 cri.go:89] found id: "36f901d7268653e5e64e73d2f8c787b658cab9899bc17b2cc522fb984b5ae3f7"
	I1121 23:51:15.381931  525022 cri.go:89] found id: "561d110537c5cbfb43c832086d9c8216a7180387df20a1b9c68b29a4b682f207"
	I1121 23:51:15.381939  525022 cri.go:89] found id: "e6dc31e0930681fac6fbd625f4ec7a07e57c10d13a728a7ec163a4c66a6d4a2b"
	I1121 23:51:15.381943  525022 cri.go:89] found id: "074654b9d6b9f820e2f61d8ef839ef5ebec8673802a3e034c02530f243f023d0"
	I1121 23:51:15.381946  525022 cri.go:89] found id: "970af788676bddee24edc4dbf7882805510ac451d6658537c9d2152752c3ffee"
	I1121 23:51:15.381949  525022 cri.go:89] found id: "f6c2269669bcf9942b43c38a6d80d882a37c12fba06a2b1b514b07dbd6183350"
	I1121 23:51:15.381954  525022 cri.go:89] found id: "415d7ebb38dbfa2b139e79fcf924802ecf12d3ba74e075e30026fdd18353d343"
	I1121 23:51:15.381958  525022 cri.go:89] found id: "ba805611fb053ae520be5762cfc188a2dd5915f488bed9770376cb5e14b60936"
	I1121 23:51:15.381963  525022 cri.go:89] found id: "0b539dfc17788b7400ee2eb5abadb008e5a8a9796bb112d8a69f14a34f2fd551"
	I1121 23:51:15.381972  525022 cri.go:89] found id: ""
	I1121 23:51:15.382023  525022 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 23:51:15.397486  525022 out.go:203] 
	W1121 23:51:15.400391  525022 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:51:15Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:51:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 23:51:15.400415  525022 out.go:285] * 
	* 
	W1121 23:51:15.407277  525022 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 23:51:15.410154  525022 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-882841 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.38s)
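Every "addons disable" failure in this report exits the same way as the Headlamp case above: before disabling an addon, minikube checks whether the cluster is paused by listing kube-system containers with crictl and then running "sudo runc list -f json" over SSH, and on this crio image that second command fails with "open /run/runc: no such file or directory", so the check errors out and the disable aborts with MK_ADDON_DISABLE_PAUSED (exit status 11). The Go sketch below is a minimal reproduction of that check using the same two commands taken verbatim from the stderr; it is not minikube's actual source, and the package layout and error handling are illustrative only.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same container listing minikube runs over SSH in the logs above.
	crictl := exec.Command("sudo", "-s", "eval",
		"crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system")
	if out, err := crictl.CombinedOutput(); err != nil {
		fmt.Printf("crictl failed: %v\n%s", err, out)
		return
	}

	// The step that actually breaks on this node: runc has no state
	// directory at /run/runc, so listing (paused) containers fails.
	runc := exec.Command("sudo", "runc", "list", "-f", "json")
	if out, err := runc.CombinedOutput(); err != nil {
		fmt.Printf("runc list failed: %v\n%s", err, out)
	}
}

Run on the node (for example via "minikube ssh"), the sketch reproduces the same "open /run/runc: no such file or directory" error that turns each otherwise-healthy addon test below into a failure at the disable step.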

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.37s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-6f9fcf858b-z8s5r" [8e8fa226-ffcb-4341-9902-d89a829583b0] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003810981s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-882841 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-882841 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (355.320943ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1121 23:51:12.109266  524448 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:51:12.110034  524448 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:51:12.110057  524448 out.go:374] Setting ErrFile to fd 2...
	I1121 23:51:12.110064  524448 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:51:12.110345  524448 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1121 23:51:12.110679  524448 mustload.go:66] Loading cluster: addons-882841
	I1121 23:51:12.111074  524448 config.go:182] Loaded profile config "addons-882841": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:51:12.111094  524448 addons.go:622] checking whether the cluster is paused
	I1121 23:51:12.111206  524448 config.go:182] Loaded profile config "addons-882841": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:51:12.111221  524448 host.go:66] Checking if "addons-882841" exists ...
	I1121 23:51:12.112264  524448 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:51:12.139777  524448 ssh_runner.go:195] Run: systemctl --version
	I1121 23:51:12.139832  524448 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:51:12.172605  524448 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	I1121 23:51:12.273164  524448 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 23:51:12.273250  524448 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 23:51:12.308870  524448 cri.go:89] found id: "d4288bfcc52cab787e4c57cd6f6ce8b5e4eab8e0f753f7b4b8c0dfbb6d7fcacf"
	I1121 23:51:12.308893  524448 cri.go:89] found id: "c2b953c3eb94bb9506b88f8f9082db05bab7bbad3c3c92fb83a61fd4148bcd7c"
	I1121 23:51:12.308902  524448 cri.go:89] found id: "4b84c2719358c69a49238d48c9737dea6a7f98672fde68733fbdfd0c5f76c519"
	I1121 23:51:12.308906  524448 cri.go:89] found id: "03e07c8bd5633a56d64051d6a63832c1a6fc109d661151083091d50ee1a7dfb7"
	I1121 23:51:12.308909  524448 cri.go:89] found id: "4f400d29139bbf4aeaa41d46055af4710e38ae5d6a844d3ee8b87ce4de3a0f3a"
	I1121 23:51:12.308914  524448 cri.go:89] found id: "0ecc1bcf505044ab1e7b917a3904e9d8ff652e08129c43483e6ff6f465bc7f48"
	I1121 23:51:12.308917  524448 cri.go:89] found id: "0d6392a88c56afc228d16b7659bcfa96628c1585bdb4b03af537ff609bf9f34a"
	I1121 23:51:12.308921  524448 cri.go:89] found id: "492d7c5835fb8502b60633915d9d3f885aa7bc3696e4febec5b394bab0a6773b"
	I1121 23:51:12.308924  524448 cri.go:89] found id: "96794062627c79a139839f726287bb12566d789fa0f4d5b1994cd88518a2e2eb"
	I1121 23:51:12.308930  524448 cri.go:89] found id: "d6df5ea7f4eb5e6b4852fd9cf791dda6c35753e420985175cbcc2a80b368d82b"
	I1121 23:51:12.308934  524448 cri.go:89] found id: "800239fcbfd600dbcec2ac03099f8200b62cf3769357e03ab2d40f672490913e"
	I1121 23:51:12.308937  524448 cri.go:89] found id: "1d9bfd16346a7b544d9696f5ac0700133b9c44dfb05b597e6ece14fdf7c1ee4d"
	I1121 23:51:12.308940  524448 cri.go:89] found id: "b7eb954adbbab5deddd57f625c7dce81a5fbc6e9ee1d2cb260d88e1fbd1482da"
	I1121 23:51:12.308944  524448 cri.go:89] found id: "ba42295e49f9af2999894c8bde53ee31c193600b80cc921d12a7b280aefbca13"
	I1121 23:51:12.308947  524448 cri.go:89] found id: "36f901d7268653e5e64e73d2f8c787b658cab9899bc17b2cc522fb984b5ae3f7"
	I1121 23:51:12.308952  524448 cri.go:89] found id: "561d110537c5cbfb43c832086d9c8216a7180387df20a1b9c68b29a4b682f207"
	I1121 23:51:12.308956  524448 cri.go:89] found id: "e6dc31e0930681fac6fbd625f4ec7a07e57c10d13a728a7ec163a4c66a6d4a2b"
	I1121 23:51:12.308963  524448 cri.go:89] found id: "074654b9d6b9f820e2f61d8ef839ef5ebec8673802a3e034c02530f243f023d0"
	I1121 23:51:12.308967  524448 cri.go:89] found id: "970af788676bddee24edc4dbf7882805510ac451d6658537c9d2152752c3ffee"
	I1121 23:51:12.308975  524448 cri.go:89] found id: "f6c2269669bcf9942b43c38a6d80d882a37c12fba06a2b1b514b07dbd6183350"
	I1121 23:51:12.308979  524448 cri.go:89] found id: "415d7ebb38dbfa2b139e79fcf924802ecf12d3ba74e075e30026fdd18353d343"
	I1121 23:51:12.308983  524448 cri.go:89] found id: "ba805611fb053ae520be5762cfc188a2dd5915f488bed9770376cb5e14b60936"
	I1121 23:51:12.308986  524448 cri.go:89] found id: "0b539dfc17788b7400ee2eb5abadb008e5a8a9796bb112d8a69f14a34f2fd551"
	I1121 23:51:12.308989  524448 cri.go:89] found id: ""
	I1121 23:51:12.309038  524448 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 23:51:12.327531  524448 out.go:203] 
	W1121 23:51:12.330456  524448 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:51:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:51:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 23:51:12.330474  524448 out.go:285] * 
	* 
	W1121 23:51:12.337163  524448 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 23:51:12.340055  524448 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-882841 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.37s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (9.48s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-882841 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-882841 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-882841 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-882841 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-882841 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-882841 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-882841 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-882841 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [4b6a8df7-0e37-45e6-9597-b37a52bd040e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [4b6a8df7-0e37-45e6-9597-b37a52bd040e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [4b6a8df7-0e37-45e6-9597-b37a52bd040e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.005965759s
addons_test.go:967: (dbg) Run:  kubectl --context addons-882841 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-882841 ssh "cat /opt/local-path-provisioner/pvc-f4b21e23-9cc1-4889-8265-d98a837f9eda_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-882841 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-882841 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-882841 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-882841 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (295.068243ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1121 23:51:11.804375  524403 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:51:11.805199  524403 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:51:11.805214  524403 out.go:374] Setting ErrFile to fd 2...
	I1121 23:51:11.805220  524403 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:51:11.805513  524403 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1121 23:51:11.805858  524403 mustload.go:66] Loading cluster: addons-882841
	I1121 23:51:11.806257  524403 config.go:182] Loaded profile config "addons-882841": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:51:11.806276  524403 addons.go:622] checking whether the cluster is paused
	I1121 23:51:11.806423  524403 config.go:182] Loaded profile config "addons-882841": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:51:11.806441  524403 host.go:66] Checking if "addons-882841" exists ...
	I1121 23:51:11.806993  524403 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:51:11.823813  524403 ssh_runner.go:195] Run: systemctl --version
	I1121 23:51:11.823868  524403 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:51:11.842364  524403 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	I1121 23:51:11.944333  524403 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 23:51:11.944437  524403 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 23:51:11.973396  524403 cri.go:89] found id: "d4288bfcc52cab787e4c57cd6f6ce8b5e4eab8e0f753f7b4b8c0dfbb6d7fcacf"
	I1121 23:51:11.973474  524403 cri.go:89] found id: "c2b953c3eb94bb9506b88f8f9082db05bab7bbad3c3c92fb83a61fd4148bcd7c"
	I1121 23:51:11.973494  524403 cri.go:89] found id: "4b84c2719358c69a49238d48c9737dea6a7f98672fde68733fbdfd0c5f76c519"
	I1121 23:51:11.973510  524403 cri.go:89] found id: "03e07c8bd5633a56d64051d6a63832c1a6fc109d661151083091d50ee1a7dfb7"
	I1121 23:51:11.973528  524403 cri.go:89] found id: "4f400d29139bbf4aeaa41d46055af4710e38ae5d6a844d3ee8b87ce4de3a0f3a"
	I1121 23:51:11.973546  524403 cri.go:89] found id: "0ecc1bcf505044ab1e7b917a3904e9d8ff652e08129c43483e6ff6f465bc7f48"
	I1121 23:51:11.973564  524403 cri.go:89] found id: "0d6392a88c56afc228d16b7659bcfa96628c1585bdb4b03af537ff609bf9f34a"
	I1121 23:51:11.973581  524403 cri.go:89] found id: "492d7c5835fb8502b60633915d9d3f885aa7bc3696e4febec5b394bab0a6773b"
	I1121 23:51:11.973599  524403 cri.go:89] found id: "96794062627c79a139839f726287bb12566d789fa0f4d5b1994cd88518a2e2eb"
	I1121 23:51:11.973620  524403 cri.go:89] found id: "d6df5ea7f4eb5e6b4852fd9cf791dda6c35753e420985175cbcc2a80b368d82b"
	I1121 23:51:11.973648  524403 cri.go:89] found id: "800239fcbfd600dbcec2ac03099f8200b62cf3769357e03ab2d40f672490913e"
	I1121 23:51:11.973666  524403 cri.go:89] found id: "1d9bfd16346a7b544d9696f5ac0700133b9c44dfb05b597e6ece14fdf7c1ee4d"
	I1121 23:51:11.973685  524403 cri.go:89] found id: "b7eb954adbbab5deddd57f625c7dce81a5fbc6e9ee1d2cb260d88e1fbd1482da"
	I1121 23:51:11.973701  524403 cri.go:89] found id: "ba42295e49f9af2999894c8bde53ee31c193600b80cc921d12a7b280aefbca13"
	I1121 23:51:11.973719  524403 cri.go:89] found id: "36f901d7268653e5e64e73d2f8c787b658cab9899bc17b2cc522fb984b5ae3f7"
	I1121 23:51:11.973738  524403 cri.go:89] found id: "561d110537c5cbfb43c832086d9c8216a7180387df20a1b9c68b29a4b682f207"
	I1121 23:51:11.973764  524403 cri.go:89] found id: "e6dc31e0930681fac6fbd625f4ec7a07e57c10d13a728a7ec163a4c66a6d4a2b"
	I1121 23:51:11.973782  524403 cri.go:89] found id: "074654b9d6b9f820e2f61d8ef839ef5ebec8673802a3e034c02530f243f023d0"
	I1121 23:51:11.973837  524403 cri.go:89] found id: "970af788676bddee24edc4dbf7882805510ac451d6658537c9d2152752c3ffee"
	I1121 23:51:11.973856  524403 cri.go:89] found id: "f6c2269669bcf9942b43c38a6d80d882a37c12fba06a2b1b514b07dbd6183350"
	I1121 23:51:11.973881  524403 cri.go:89] found id: "415d7ebb38dbfa2b139e79fcf924802ecf12d3ba74e075e30026fdd18353d343"
	I1121 23:51:11.973909  524403 cri.go:89] found id: "ba805611fb053ae520be5762cfc188a2dd5915f488bed9770376cb5e14b60936"
	I1121 23:51:11.973928  524403 cri.go:89] found id: "0b539dfc17788b7400ee2eb5abadb008e5a8a9796bb112d8a69f14a34f2fd551"
	I1121 23:51:11.973946  524403 cri.go:89] found id: ""
	I1121 23:51:11.974023  524403 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 23:51:11.989631  524403 out.go:203] 
	W1121 23:51:11.994509  524403 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:51:11Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:51:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 23:51:11.994532  524403 out.go:285] * 
	* 
	W1121 23:51:12.017003  524403 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 23:51:12.025608  524403 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-882841 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (9.48s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.27s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-4jvp9" [54878aa0-88b5-4a6b-ad02-91d34115cc3d] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004060839s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-882841 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-882841 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (265.856049ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1121 23:51:02.341135  523949 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:51:02.341994  523949 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:51:02.342010  523949 out.go:374] Setting ErrFile to fd 2...
	I1121 23:51:02.342016  523949 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:51:02.342325  523949 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1121 23:51:02.342682  523949 mustload.go:66] Loading cluster: addons-882841
	I1121 23:51:02.343113  523949 config.go:182] Loaded profile config "addons-882841": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:51:02.343135  523949 addons.go:622] checking whether the cluster is paused
	I1121 23:51:02.343275  523949 config.go:182] Loaded profile config "addons-882841": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:51:02.343293  523949 host.go:66] Checking if "addons-882841" exists ...
	I1121 23:51:02.343891  523949 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:51:02.361962  523949 ssh_runner.go:195] Run: systemctl --version
	I1121 23:51:02.362018  523949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:51:02.382753  523949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	I1121 23:51:02.485246  523949 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 23:51:02.485347  523949 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 23:51:02.520213  523949 cri.go:89] found id: "d4288bfcc52cab787e4c57cd6f6ce8b5e4eab8e0f753f7b4b8c0dfbb6d7fcacf"
	I1121 23:51:02.520236  523949 cri.go:89] found id: "c2b953c3eb94bb9506b88f8f9082db05bab7bbad3c3c92fb83a61fd4148bcd7c"
	I1121 23:51:02.520242  523949 cri.go:89] found id: "4b84c2719358c69a49238d48c9737dea6a7f98672fde68733fbdfd0c5f76c519"
	I1121 23:51:02.520246  523949 cri.go:89] found id: "03e07c8bd5633a56d64051d6a63832c1a6fc109d661151083091d50ee1a7dfb7"
	I1121 23:51:02.520249  523949 cri.go:89] found id: "4f400d29139bbf4aeaa41d46055af4710e38ae5d6a844d3ee8b87ce4de3a0f3a"
	I1121 23:51:02.520252  523949 cri.go:89] found id: "0ecc1bcf505044ab1e7b917a3904e9d8ff652e08129c43483e6ff6f465bc7f48"
	I1121 23:51:02.520256  523949 cri.go:89] found id: "0d6392a88c56afc228d16b7659bcfa96628c1585bdb4b03af537ff609bf9f34a"
	I1121 23:51:02.520259  523949 cri.go:89] found id: "492d7c5835fb8502b60633915d9d3f885aa7bc3696e4febec5b394bab0a6773b"
	I1121 23:51:02.520262  523949 cri.go:89] found id: "96794062627c79a139839f726287bb12566d789fa0f4d5b1994cd88518a2e2eb"
	I1121 23:51:02.520270  523949 cri.go:89] found id: "d6df5ea7f4eb5e6b4852fd9cf791dda6c35753e420985175cbcc2a80b368d82b"
	I1121 23:51:02.520273  523949 cri.go:89] found id: "800239fcbfd600dbcec2ac03099f8200b62cf3769357e03ab2d40f672490913e"
	I1121 23:51:02.520276  523949 cri.go:89] found id: "1d9bfd16346a7b544d9696f5ac0700133b9c44dfb05b597e6ece14fdf7c1ee4d"
	I1121 23:51:02.520279  523949 cri.go:89] found id: "b7eb954adbbab5deddd57f625c7dce81a5fbc6e9ee1d2cb260d88e1fbd1482da"
	I1121 23:51:02.520283  523949 cri.go:89] found id: "ba42295e49f9af2999894c8bde53ee31c193600b80cc921d12a7b280aefbca13"
	I1121 23:51:02.520286  523949 cri.go:89] found id: "36f901d7268653e5e64e73d2f8c787b658cab9899bc17b2cc522fb984b5ae3f7"
	I1121 23:51:02.520290  523949 cri.go:89] found id: "561d110537c5cbfb43c832086d9c8216a7180387df20a1b9c68b29a4b682f207"
	I1121 23:51:02.520293  523949 cri.go:89] found id: "e6dc31e0930681fac6fbd625f4ec7a07e57c10d13a728a7ec163a4c66a6d4a2b"
	I1121 23:51:02.520297  523949 cri.go:89] found id: "074654b9d6b9f820e2f61d8ef839ef5ebec8673802a3e034c02530f243f023d0"
	I1121 23:51:02.520300  523949 cri.go:89] found id: "970af788676bddee24edc4dbf7882805510ac451d6658537c9d2152752c3ffee"
	I1121 23:51:02.520303  523949 cri.go:89] found id: "f6c2269669bcf9942b43c38a6d80d882a37c12fba06a2b1b514b07dbd6183350"
	I1121 23:51:02.520308  523949 cri.go:89] found id: "415d7ebb38dbfa2b139e79fcf924802ecf12d3ba74e075e30026fdd18353d343"
	I1121 23:51:02.520311  523949 cri.go:89] found id: "ba805611fb053ae520be5762cfc188a2dd5915f488bed9770376cb5e14b60936"
	I1121 23:51:02.520314  523949 cri.go:89] found id: "0b539dfc17788b7400ee2eb5abadb008e5a8a9796bb112d8a69f14a34f2fd551"
	I1121 23:51:02.520317  523949 cri.go:89] found id: ""
	I1121 23:51:02.520369  523949 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 23:51:02.536190  523949 out.go:203] 
	W1121 23:51:02.539397  523949 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:51:02Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:51:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 23:51:02.539428  523949 out.go:285] * 
	* 
	W1121 23:51:02.546228  523949 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 23:51:02.549466  523949 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-882841 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.27s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (6.26s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-x6sbv" [7a59fb5c-32f1-4738-b675-ef20e83a1d80] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003541656s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-882841 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-882841 addons disable yakd --alsologtostderr -v=1: exit status 11 (254.822644ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1121 23:50:57.083177  523877 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:50:57.083766  523877 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:50:57.083784  523877 out.go:374] Setting ErrFile to fd 2...
	I1121 23:50:57.083791  523877 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:50:57.084071  523877 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1121 23:50:57.084412  523877 mustload.go:66] Loading cluster: addons-882841
	I1121 23:50:57.084819  523877 config.go:182] Loaded profile config "addons-882841": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:50:57.084838  523877 addons.go:622] checking whether the cluster is paused
	I1121 23:50:57.084951  523877 config.go:182] Loaded profile config "addons-882841": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:50:57.084966  523877 host.go:66] Checking if "addons-882841" exists ...
	I1121 23:50:57.085471  523877 cli_runner.go:164] Run: docker container inspect addons-882841 --format={{.State.Status}}
	I1121 23:50:57.102535  523877 ssh_runner.go:195] Run: systemctl --version
	I1121 23:50:57.102600  523877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-882841
	I1121 23:50:57.122362  523877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/addons-882841/id_rsa Username:docker}
	I1121 23:50:57.224447  523877 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 23:50:57.224522  523877 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 23:50:57.255048  523877 cri.go:89] found id: "d4288bfcc52cab787e4c57cd6f6ce8b5e4eab8e0f753f7b4b8c0dfbb6d7fcacf"
	I1121 23:50:57.255070  523877 cri.go:89] found id: "c2b953c3eb94bb9506b88f8f9082db05bab7bbad3c3c92fb83a61fd4148bcd7c"
	I1121 23:50:57.255075  523877 cri.go:89] found id: "4b84c2719358c69a49238d48c9737dea6a7f98672fde68733fbdfd0c5f76c519"
	I1121 23:50:57.255079  523877 cri.go:89] found id: "03e07c8bd5633a56d64051d6a63832c1a6fc109d661151083091d50ee1a7dfb7"
	I1121 23:50:57.255082  523877 cri.go:89] found id: "4f400d29139bbf4aeaa41d46055af4710e38ae5d6a844d3ee8b87ce4de3a0f3a"
	I1121 23:50:57.255086  523877 cri.go:89] found id: "0ecc1bcf505044ab1e7b917a3904e9d8ff652e08129c43483e6ff6f465bc7f48"
	I1121 23:50:57.255089  523877 cri.go:89] found id: "0d6392a88c56afc228d16b7659bcfa96628c1585bdb4b03af537ff609bf9f34a"
	I1121 23:50:57.255092  523877 cri.go:89] found id: "492d7c5835fb8502b60633915d9d3f885aa7bc3696e4febec5b394bab0a6773b"
	I1121 23:50:57.255117  523877 cri.go:89] found id: "96794062627c79a139839f726287bb12566d789fa0f4d5b1994cd88518a2e2eb"
	I1121 23:50:57.255131  523877 cri.go:89] found id: "d6df5ea7f4eb5e6b4852fd9cf791dda6c35753e420985175cbcc2a80b368d82b"
	I1121 23:50:57.255135  523877 cri.go:89] found id: "800239fcbfd600dbcec2ac03099f8200b62cf3769357e03ab2d40f672490913e"
	I1121 23:50:57.255139  523877 cri.go:89] found id: "1d9bfd16346a7b544d9696f5ac0700133b9c44dfb05b597e6ece14fdf7c1ee4d"
	I1121 23:50:57.255142  523877 cri.go:89] found id: "b7eb954adbbab5deddd57f625c7dce81a5fbc6e9ee1d2cb260d88e1fbd1482da"
	I1121 23:50:57.255153  523877 cri.go:89] found id: "ba42295e49f9af2999894c8bde53ee31c193600b80cc921d12a7b280aefbca13"
	I1121 23:50:57.255156  523877 cri.go:89] found id: "36f901d7268653e5e64e73d2f8c787b658cab9899bc17b2cc522fb984b5ae3f7"
	I1121 23:50:57.255162  523877 cri.go:89] found id: "561d110537c5cbfb43c832086d9c8216a7180387df20a1b9c68b29a4b682f207"
	I1121 23:50:57.255166  523877 cri.go:89] found id: "e6dc31e0930681fac6fbd625f4ec7a07e57c10d13a728a7ec163a4c66a6d4a2b"
	I1121 23:50:57.255171  523877 cri.go:89] found id: "074654b9d6b9f820e2f61d8ef839ef5ebec8673802a3e034c02530f243f023d0"
	I1121 23:50:57.255174  523877 cri.go:89] found id: "970af788676bddee24edc4dbf7882805510ac451d6658537c9d2152752c3ffee"
	I1121 23:50:57.255177  523877 cri.go:89] found id: "f6c2269669bcf9942b43c38a6d80d882a37c12fba06a2b1b514b07dbd6183350"
	I1121 23:50:57.255195  523877 cri.go:89] found id: "415d7ebb38dbfa2b139e79fcf924802ecf12d3ba74e075e30026fdd18353d343"
	I1121 23:50:57.255203  523877 cri.go:89] found id: "ba805611fb053ae520be5762cfc188a2dd5915f488bed9770376cb5e14b60936"
	I1121 23:50:57.255206  523877 cri.go:89] found id: "0b539dfc17788b7400ee2eb5abadb008e5a8a9796bb112d8a69f14a34f2fd551"
	I1121 23:50:57.255209  523877 cri.go:89] found id: ""
	I1121 23:50:57.255271  523877 ssh_runner.go:195] Run: sudo runc list -f json
	I1121 23:50:57.270130  523877 out.go:203] 
	W1121 23:50:57.272384  523877 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:50:57Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:50:57Z" level=error msg="open /run/runc: no such file or directory"
	
	W1121 23:50:57.272407  523877 out.go:285] * 
	* 
	W1121 23:50:57.279348  523877 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1121 23:50:57.281689  523877 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-882841 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.26s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (603.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-354825 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-354825 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-fglvv" [065ee2fd-7122-4a1b-ab5d-ca8bf81bcd00] Pending
helpers_test.go:352: "hello-node-connect-7d85dfc575-fglvv" [065ee2fd-7122-4a1b-ab5d-ca8bf81bcd00] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-354825 -n functional-354825
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-11-22 00:07:54.480138525 +0000 UTC m=+1251.709946657
functional_test.go:1645: (dbg) Run:  kubectl --context functional-354825 describe po hello-node-connect-7d85dfc575-fglvv -n default
functional_test.go:1645: (dbg) kubectl --context functional-354825 describe po hello-node-connect-7d85dfc575-fglvv -n default:
Name:             hello-node-connect-7d85dfc575-fglvv
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-354825/192.168.49.2
Start Time:       Fri, 21 Nov 2025 23:57:53 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x6bqg (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-x6bqg:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-fglvv to functional-354825
Normal   Pulling    7m8s (x5 over 9m59s)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m8s (x5 over 9m59s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m8s (x5 over 9m59s)    kubelet            Error: ErrImagePull
Warning  Failed     4m57s (x20 over 9m59s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m45s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-354825 logs hello-node-connect-7d85dfc575-fglvv -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-354825 logs hello-node-connect-7d85dfc575-fglvv -n default: exit status 1 (97.474719ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-fglvv" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-354825 logs hello-node-connect-7d85dfc575-fglvv -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-354825 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-fglvv
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-354825/192.168.49.2
Start Time:       Fri, 21 Nov 2025 23:57:53 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x6bqg (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-x6bqg:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-fglvv to functional-354825
Normal   Pulling    7m8s (x5 over 9m59s)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m8s (x5 over 9m59s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m8s (x5 over 9m59s)    kubelet            Error: ErrImagePull
Warning  Failed     4m57s (x20 over 9m59s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m45s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-354825 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-354825 logs -l app=hello-node-connect: exit status 1 (83.962251ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-fglvv" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-354825 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-354825 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.98.36.84
IPs:                      10.98.36.84
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30820/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
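The pod events above point at the root cause: CRI-O on the node enforces short-name mode, so the unqualified image "kicbase/echo-server" resolves to an ambiguous list of registries and the pull never succeeds, which is also why the service shows no endpoints. A small sketch of the usual remedy follows: point the existing deployment at a fully qualified image reference so no short-name resolution is needed. The kubectl context, deployment, and container name are taken from the log; the registry and tag ("docker.io/kicbase/echo-server:1.0") are assumptions, not values from this run.

package main

// redeploy_fqin.go: switch hello-node-connect to a fully qualified image name
// and watch the rollout so a bad tag surfaces as a fast failure.

import (
	"fmt"
	"os/exec"
)

func kubectl(args ...string) {
	full := append([]string{"--context", "functional-354825"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	fmt.Printf("$ kubectl %v\n%s(err: %v)\n", full, out, err)
}

func main() {
	// Replace the ambiguous short name with a fully qualified reference
	// (registry and tag are assumed, adjust to an image that exists).
	kubectl("set", "image", "deployment/hello-node-connect",
		"echo-server=docker.io/kicbase/echo-server:1.0")

	// Wait for the rollout; an ImagePullBackOff will show up here quickly.
	kubectl("rollout", "status", "deployment/hello-node-connect", "--timeout=2m")
}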
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-354825
helpers_test.go:243: (dbg) docker inspect functional-354825:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4e41c2fd8dc039f941c2ec74349648e8a1391fe0b117e3df7dc6865863da7f7a",
	        "Created": "2025-11-21T23:55:12.358740088Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 532743,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-21T23:55:12.403473174Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:97061622515a85470dad13cb046ad5fc5021fcd2b05aa921a620abb52951cd5d",
	        "ResolvConfPath": "/var/lib/docker/containers/4e41c2fd8dc039f941c2ec74349648e8a1391fe0b117e3df7dc6865863da7f7a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4e41c2fd8dc039f941c2ec74349648e8a1391fe0b117e3df7dc6865863da7f7a/hostname",
	        "HostsPath": "/var/lib/docker/containers/4e41c2fd8dc039f941c2ec74349648e8a1391fe0b117e3df7dc6865863da7f7a/hosts",
	        "LogPath": "/var/lib/docker/containers/4e41c2fd8dc039f941c2ec74349648e8a1391fe0b117e3df7dc6865863da7f7a/4e41c2fd8dc039f941c2ec74349648e8a1391fe0b117e3df7dc6865863da7f7a-json.log",
	        "Name": "/functional-354825",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-354825:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-354825",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4e41c2fd8dc039f941c2ec74349648e8a1391fe0b117e3df7dc6865863da7f7a",
	                "LowerDir": "/var/lib/docker/overlay2/a0a7bb5ecf3c9ab2237e745d6c77be07af5f527d094acde70b4321c427a76e57-init/diff:/var/lib/docker/overlay2/7e8788c6de692bc1c3758a2bb2c4b8da0fbba26855f855c0f3b655bfbdd92f8e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a0a7bb5ecf3c9ab2237e745d6c77be07af5f527d094acde70b4321c427a76e57/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a0a7bb5ecf3c9ab2237e745d6c77be07af5f527d094acde70b4321c427a76e57/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a0a7bb5ecf3c9ab2237e745d6c77be07af5f527d094acde70b4321c427a76e57/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-354825",
	                "Source": "/var/lib/docker/volumes/functional-354825/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-354825",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-354825",
	                "name.minikube.sigs.k8s.io": "functional-354825",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ed77701e63505e04e3a196edcb68be71f470dc58fb20f443e6035a5023539d5b",
	            "SandboxKey": "/var/run/docker/netns/ed77701e6350",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33505"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33506"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33509"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33507"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33508"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-354825": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "06:48:b9:69:fc:50",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1922e9d9f200982c6108a81b71fccc9a62db97308220a0f5703bcc82a185bc22",
	                    "EndpointID": "c879fee6fa85a92c495b849c723462c75ea2a62a8acb8ab5674c6e0f5d4e98c1",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-354825",
	                        "4e41c2fd8dc0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-354825 -n functional-354825
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-354825 logs -n 25: (1.442099505s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                   ARGS                                                   │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                         │ minikube          │ jenkins │ v1.37.0 │ 21 Nov 25 23:57 UTC │ 21 Nov 25 23:57 UTC │
	│ cache   │ list                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 21 Nov 25 23:57 UTC │ 21 Nov 25 23:57 UTC │
	│ ssh     │ functional-354825 ssh sudo crictl images                                                                 │ functional-354825 │ jenkins │ v1.37.0 │ 21 Nov 25 23:57 UTC │ 21 Nov 25 23:57 UTC │
	│ ssh     │ functional-354825 ssh sudo crictl rmi registry.k8s.io/pause:latest                                       │ functional-354825 │ jenkins │ v1.37.0 │ 21 Nov 25 23:57 UTC │ 21 Nov 25 23:57 UTC │
	│ ssh     │ functional-354825 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                  │ functional-354825 │ jenkins │ v1.37.0 │ 21 Nov 25 23:57 UTC │                     │
	│ cache   │ functional-354825 cache reload                                                                           │ functional-354825 │ jenkins │ v1.37.0 │ 21 Nov 25 23:57 UTC │ 21 Nov 25 23:57 UTC │
	│ ssh     │ functional-354825 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                  │ functional-354825 │ jenkins │ v1.37.0 │ 21 Nov 25 23:57 UTC │ 21 Nov 25 23:57 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                         │ minikube          │ jenkins │ v1.37.0 │ 21 Nov 25 23:57 UTC │ 21 Nov 25 23:57 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                      │ minikube          │ jenkins │ v1.37.0 │ 21 Nov 25 23:57 UTC │ 21 Nov 25 23:57 UTC │
	│ kubectl │ functional-354825 kubectl -- --context functional-354825 get pods                                        │ functional-354825 │ jenkins │ v1.37.0 │ 21 Nov 25 23:57 UTC │ 21 Nov 25 23:57 UTC │
	│ start   │ -p functional-354825 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all │ functional-354825 │ jenkins │ v1.37.0 │ 21 Nov 25 23:57 UTC │ 21 Nov 25 23:57 UTC │
	│ service │ invalid-svc -p functional-354825                                                                         │ functional-354825 │ jenkins │ v1.37.0 │ 21 Nov 25 23:57 UTC │                     │
	│ config  │ functional-354825 config unset cpus                                                                      │ functional-354825 │ jenkins │ v1.37.0 │ 21 Nov 25 23:57 UTC │ 21 Nov 25 23:57 UTC │
	│ ssh     │ functional-354825 ssh echo hello                                                                         │ functional-354825 │ jenkins │ v1.37.0 │ 21 Nov 25 23:57 UTC │ 21 Nov 25 23:57 UTC │
	│ config  │ functional-354825 config get cpus                                                                        │ functional-354825 │ jenkins │ v1.37.0 │ 21 Nov 25 23:57 UTC │                     │
	│ config  │ functional-354825 config set cpus 2                                                                      │ functional-354825 │ jenkins │ v1.37.0 │ 21 Nov 25 23:57 UTC │ 21 Nov 25 23:57 UTC │
	│ config  │ functional-354825 config get cpus                                                                        │ functional-354825 │ jenkins │ v1.37.0 │ 21 Nov 25 23:57 UTC │ 21 Nov 25 23:57 UTC │
	│ config  │ functional-354825 config unset cpus                                                                      │ functional-354825 │ jenkins │ v1.37.0 │ 21 Nov 25 23:57 UTC │ 21 Nov 25 23:57 UTC │
	│ ssh     │ functional-354825 ssh cat /etc/hostname                                                                  │ functional-354825 │ jenkins │ v1.37.0 │ 21 Nov 25 23:57 UTC │ 21 Nov 25 23:57 UTC │
	│ config  │ functional-354825 config get cpus                                                                        │ functional-354825 │ jenkins │ v1.37.0 │ 21 Nov 25 23:57 UTC │                     │
	│ tunnel  │ functional-354825 tunnel --alsologtostderr                                                               │ functional-354825 │ jenkins │ v1.37.0 │ 21 Nov 25 23:57 UTC │                     │
	│ tunnel  │ functional-354825 tunnel --alsologtostderr                                                               │ functional-354825 │ jenkins │ v1.37.0 │ 21 Nov 25 23:57 UTC │                     │
	│ tunnel  │ functional-354825 tunnel --alsologtostderr                                                               │ functional-354825 │ jenkins │ v1.37.0 │ 21 Nov 25 23:57 UTC │                     │
	│ addons  │ functional-354825 addons list                                                                            │ functional-354825 │ jenkins │ v1.37.0 │ 21 Nov 25 23:57 UTC │ 21 Nov 25 23:57 UTC │
	│ addons  │ functional-354825 addons list -o json                                                                    │ functional-354825 │ jenkins │ v1.37.0 │ 21 Nov 25 23:57 UTC │ 21 Nov 25 23:57 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 23:57:03
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 23:57:03.757762  536897 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:57:03.757909  536897 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:57:03.757914  536897 out.go:374] Setting ErrFile to fd 2...
	I1121 23:57:03.757918  536897 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:57:03.758299  536897 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1121 23:57:03.758761  536897 out.go:368] Setting JSON to false
	I1121 23:57:03.760043  536897 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":16740,"bootTime":1763752684,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1121 23:57:03.760121  536897 start.go:143] virtualization:  
	I1121 23:57:03.763633  536897 out.go:179] * [functional-354825] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1121 23:57:03.766522  536897 out.go:179]   - MINIKUBE_LOCATION=21934
	I1121 23:57:03.766684  536897 notify.go:221] Checking for updates...
	I1121 23:57:03.772373  536897 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 23:57:03.775369  536897 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1121 23:57:03.778386  536897 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-513600/.minikube
	I1121 23:57:03.781164  536897 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1121 23:57:03.784034  536897 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 23:57:03.787513  536897 config.go:182] Loaded profile config "functional-354825": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:57:03.787605  536897 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 23:57:03.814843  536897 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1121 23:57:03.814951  536897 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 23:57:03.873199  536897 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:65 SystemTime:2025-11-21 23:57:03.86340786 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 23:57:03.873291  536897 docker.go:319] overlay module found
	I1121 23:57:03.878319  536897 out.go:179] * Using the docker driver based on existing profile
	I1121 23:57:03.881252  536897 start.go:309] selected driver: docker
	I1121 23:57:03.881262  536897 start.go:930] validating driver "docker" against &{Name:functional-354825 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-354825 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 23:57:03.881354  536897 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 23:57:03.881452  536897 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 23:57:03.937082  536897 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:65 SystemTime:2025-11-21 23:57:03.928020776 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 23:57:03.937471  536897 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 23:57:03.937496  536897 cni.go:84] Creating CNI manager for ""
	I1121 23:57:03.937548  536897 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 23:57:03.937590  536897 start.go:353] cluster config:
	{Name:functional-354825 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-354825 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 23:57:03.940668  536897 out.go:179] * Starting "functional-354825" primary control-plane node in "functional-354825" cluster
	I1121 23:57:03.943549  536897 cache.go:134] Beginning downloading kic base image for docker with crio
	I1121 23:57:03.946511  536897 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1121 23:57:03.949461  536897 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 23:57:03.949467  536897 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1121 23:57:03.949518  536897 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1121 23:57:03.949527  536897 cache.go:65] Caching tarball of preloaded images
	I1121 23:57:03.949604  536897 preload.go:238] Found /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1121 23:57:03.949613  536897 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1121 23:57:03.949715  536897 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/functional-354825/config.json ...
	I1121 23:57:03.969081  536897 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1121 23:57:03.969091  536897 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1121 23:57:03.969110  536897 cache.go:243] Successfully downloaded all kic artifacts
	I1121 23:57:03.969131  536897 start.go:360] acquireMachinesLock for functional-354825: {Name:mk86839b76ce71e9f4e0da258cf510bbe529779f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1121 23:57:03.969191  536897 start.go:364] duration metric: took 45.143µs to acquireMachinesLock for "functional-354825"
	I1121 23:57:03.969209  536897 start.go:96] Skipping create...Using existing machine configuration
	I1121 23:57:03.969212  536897 fix.go:54] fixHost starting: 
	I1121 23:57:03.969478  536897 cli_runner.go:164] Run: docker container inspect functional-354825 --format={{.State.Status}}
	I1121 23:57:03.986550  536897 fix.go:112] recreateIfNeeded on functional-354825: state=Running err=<nil>
	W1121 23:57:03.986569  536897 fix.go:138] unexpected machine state, will restart: <nil>
	I1121 23:57:03.989815  536897 out.go:252] * Updating the running docker "functional-354825" container ...
	I1121 23:57:03.989840  536897 machine.go:94] provisionDockerMachine start ...
	I1121 23:57:03.989934  536897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-354825
	I1121 23:57:04.010948  536897 main.go:143] libmachine: Using SSH client type: native
	I1121 23:57:04.011284  536897 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33505 <nil> <nil>}
	I1121 23:57:04.011291  536897 main.go:143] libmachine: About to run SSH command:
	hostname
	I1121 23:57:04.149519  536897 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-354825
	
	I1121 23:57:04.149533  536897 ubuntu.go:182] provisioning hostname "functional-354825"
	I1121 23:57:04.149604  536897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-354825
	I1121 23:57:04.167694  536897 main.go:143] libmachine: Using SSH client type: native
	I1121 23:57:04.167991  536897 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33505 <nil> <nil>}
	I1121 23:57:04.168000  536897 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-354825 && echo "functional-354825" | sudo tee /etc/hostname
	I1121 23:57:04.320098  536897 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-354825
	
	I1121 23:57:04.320167  536897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-354825
	I1121 23:57:04.339867  536897 main.go:143] libmachine: Using SSH client type: native
	I1121 23:57:04.340185  536897 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33505 <nil> <nil>}
	I1121 23:57:04.340208  536897 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-354825' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-354825/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-354825' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1121 23:57:04.482122  536897 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1121 23:57:04.482139  536897 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-513600/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-513600/.minikube}
	I1121 23:57:04.482166  536897 ubuntu.go:190] setting up certificates
	I1121 23:57:04.482173  536897 provision.go:84] configureAuth start
	I1121 23:57:04.482229  536897 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-354825
	I1121 23:57:04.499005  536897 provision.go:143] copyHostCerts
	I1121 23:57:04.499065  536897 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem, removing ...
	I1121 23:57:04.499080  536897 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem
	I1121 23:57:04.499150  536897 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem (1078 bytes)
	I1121 23:57:04.499256  536897 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem, removing ...
	I1121 23:57:04.499260  536897 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem
	I1121 23:57:04.499284  536897 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem (1123 bytes)
	I1121 23:57:04.499346  536897 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem, removing ...
	I1121 23:57:04.499349  536897 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem
	I1121 23:57:04.499372  536897 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem (1675 bytes)
	I1121 23:57:04.499463  536897 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem org=jenkins.functional-354825 san=[127.0.0.1 192.168.49.2 functional-354825 localhost minikube]
	I1121 23:57:04.867470  536897 provision.go:177] copyRemoteCerts
	I1121 23:57:04.867520  536897 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1121 23:57:04.867557  536897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-354825
	I1121 23:57:04.885535  536897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33505 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/functional-354825/id_rsa Username:docker}
	I1121 23:57:04.986058  536897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1121 23:57:05.004728  536897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1121 23:57:05.025395  536897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1121 23:57:05.043619  536897 provision.go:87] duration metric: took 561.423807ms to configureAuth
	I1121 23:57:05.043634  536897 ubuntu.go:206] setting minikube options for container-runtime
	I1121 23:57:05.043829  536897 config.go:182] Loaded profile config "functional-354825": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:57:05.043929  536897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-354825
	I1121 23:57:05.060422  536897 main.go:143] libmachine: Using SSH client type: native
	I1121 23:57:05.060747  536897 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33505 <nil> <nil>}
	I1121 23:57:05.060761  536897 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1121 23:57:10.479717  536897 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1121 23:57:10.479731  536897 machine.go:97] duration metric: took 6.489883947s to provisionDockerMachine
	I1121 23:57:10.479741  536897 start.go:293] postStartSetup for "functional-354825" (driver="docker")
	I1121 23:57:10.479750  536897 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1121 23:57:10.479820  536897 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1121 23:57:10.479858  536897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-354825
	I1121 23:57:10.498011  536897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33505 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/functional-354825/id_rsa Username:docker}
	I1121 23:57:10.602605  536897 ssh_runner.go:195] Run: cat /etc/os-release
	I1121 23:57:10.606390  536897 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1121 23:57:10.606407  536897 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1121 23:57:10.606417  536897 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/addons for local assets ...
	I1121 23:57:10.606471  536897 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/files for local assets ...
	I1121 23:57:10.606555  536897 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem -> 5169372.pem in /etc/ssl/certs
	I1121 23:57:10.606633  536897 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/test/nested/copy/516937/hosts -> hosts in /etc/test/nested/copy/516937
	I1121 23:57:10.606675  536897 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/516937
	I1121 23:57:10.614334  536897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /etc/ssl/certs/5169372.pem (1708 bytes)
	I1121 23:57:10.633192  536897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/test/nested/copy/516937/hosts --> /etc/test/nested/copy/516937/hosts (40 bytes)
	I1121 23:57:10.651751  536897 start.go:296] duration metric: took 171.996304ms for postStartSetup
	I1121 23:57:10.651822  536897 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1121 23:57:10.651862  536897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-354825
	I1121 23:57:10.668578  536897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33505 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/functional-354825/id_rsa Username:docker}
	I1121 23:57:10.767005  536897 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1121 23:57:10.771958  536897 fix.go:56] duration metric: took 6.802738069s for fixHost
	I1121 23:57:10.771973  536897 start.go:83] releasing machines lock for "functional-354825", held for 6.802775327s
	I1121 23:57:10.772041  536897 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-354825
	I1121 23:57:10.788577  536897 ssh_runner.go:195] Run: cat /version.json
	I1121 23:57:10.788627  536897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-354825
	I1121 23:57:10.788900  536897 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1121 23:57:10.788944  536897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-354825
	I1121 23:57:10.815183  536897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33505 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/functional-354825/id_rsa Username:docker}
	I1121 23:57:10.818529  536897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33505 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/functional-354825/id_rsa Username:docker}
	I1121 23:57:11.002799  536897 ssh_runner.go:195] Run: systemctl --version
	I1121 23:57:11.010140  536897 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1121 23:57:11.046748  536897 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1121 23:57:11.051191  536897 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1121 23:57:11.051251  536897 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1121 23:57:11.059338  536897 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1121 23:57:11.059351  536897 start.go:496] detecting cgroup driver to use...
	I1121 23:57:11.059382  536897 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1121 23:57:11.059429  536897 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1121 23:57:11.075205  536897 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1121 23:57:11.088857  536897 docker.go:218] disabling cri-docker service (if available) ...
	I1121 23:57:11.088925  536897 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1121 23:57:11.105270  536897 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1121 23:57:11.120392  536897 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1121 23:57:11.257639  536897 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1121 23:57:11.382833  536897 docker.go:234] disabling docker service ...
	I1121 23:57:11.382901  536897 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1121 23:57:11.398994  536897 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1121 23:57:11.412365  536897 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1121 23:57:11.552718  536897 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1121 23:57:11.691629  536897 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1121 23:57:11.705443  536897 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1121 23:57:11.720496  536897 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1121 23:57:11.720555  536897 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 23:57:11.729300  536897 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1121 23:57:11.729358  536897 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 23:57:11.738454  536897 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 23:57:11.747379  536897 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 23:57:11.756264  536897 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1121 23:57:11.765207  536897 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 23:57:11.774155  536897 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 23:57:11.782579  536897 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1121 23:57:11.791284  536897 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1121 23:57:11.798746  536897 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1121 23:57:11.806291  536897 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 23:57:11.934899  536897 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1121 23:57:12.186543  536897 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1121 23:57:12.186611  536897 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1121 23:57:12.190473  536897 start.go:564] Will wait 60s for crictl version
	I1121 23:57:12.190527  536897 ssh_runner.go:195] Run: which crictl
	I1121 23:57:12.193944  536897 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1121 23:57:12.223391  536897 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1121 23:57:12.223464  536897 ssh_runner.go:195] Run: crio --version
	I1121 23:57:12.251617  536897 ssh_runner.go:195] Run: crio --version
	I1121 23:57:12.283439  536897 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1121 23:57:12.286473  536897 cli_runner.go:164] Run: docker network inspect functional-354825 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1121 23:57:12.302259  536897 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1121 23:57:12.309529  536897 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1121 23:57:12.312395  536897 kubeadm.go:884] updating cluster {Name:functional-354825 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-354825 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1121 23:57:12.312535  536897 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 23:57:12.312606  536897 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 23:57:12.370137  536897 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 23:57:12.370147  536897 crio.go:433] Images already preloaded, skipping extraction
	I1121 23:57:12.370201  536897 ssh_runner.go:195] Run: sudo crictl images --output json
	I1121 23:57:12.409567  536897 crio.go:514] all images are preloaded for cri-o runtime.
	I1121 23:57:12.409579  536897 cache_images.go:86] Images are preloaded, skipping loading
	I1121 23:57:12.409585  536897 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1121 23:57:12.409684  536897 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-354825 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-354825 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1121 23:57:12.409763  536897 ssh_runner.go:195] Run: crio config
	I1121 23:57:12.478719  536897 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1121 23:57:12.478750  536897 cni.go:84] Creating CNI manager for ""
	I1121 23:57:12.478758  536897 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 23:57:12.478773  536897 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1121 23:57:12.478798  536897 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-354825 NodeName:functional-354825 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1121 23:57:12.478966  536897 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-354825"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1121 23:57:12.479045  536897 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1121 23:57:12.486843  536897 binaries.go:51] Found k8s binaries, skipping transfer
	I1121 23:57:12.486901  536897 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1121 23:57:12.494097  536897 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1121 23:57:12.508149  536897 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1121 23:57:12.520192  536897 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
	I1121 23:57:12.533669  536897 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1121 23:57:12.537438  536897 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 23:57:12.672072  536897 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 23:57:12.685922  536897 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/functional-354825 for IP: 192.168.49.2
	I1121 23:57:12.685931  536897 certs.go:195] generating shared ca certs ...
	I1121 23:57:12.685949  536897 certs.go:227] acquiring lock for ca certs: {Name:mkaf4c79493334cb2058b349c36be7473837f9b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:57:12.686090  536897 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key
	I1121 23:57:12.686131  536897 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key
	I1121 23:57:12.686137  536897 certs.go:257] generating profile certs ...
	I1121 23:57:12.686225  536897 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/functional-354825/client.key
	I1121 23:57:12.686269  536897 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/functional-354825/apiserver.key.4a6ec1a9
	I1121 23:57:12.686307  536897 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/functional-354825/proxy-client.key
	I1121 23:57:12.686416  536897 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem (1338 bytes)
	W1121 23:57:12.686458  536897 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937_empty.pem, impossibly tiny 0 bytes
	I1121 23:57:12.686465  536897 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem (1679 bytes)
	I1121 23:57:12.686490  536897 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem (1078 bytes)
	I1121 23:57:12.686513  536897 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem (1123 bytes)
	I1121 23:57:12.686537  536897 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem (1675 bytes)
	I1121 23:57:12.686580  536897 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem (1708 bytes)
	I1121 23:57:12.687203  536897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1121 23:57:12.706572  536897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1121 23:57:12.724083  536897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1121 23:57:12.740447  536897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1121 23:57:12.758032  536897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/functional-354825/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1121 23:57:12.774971  536897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/functional-354825/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1121 23:57:12.795058  536897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/functional-354825/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1121 23:57:12.811892  536897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/functional-354825/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1121 23:57:12.829135  536897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem --> /usr/share/ca-certificates/516937.pem (1338 bytes)
	I1121 23:57:12.846436  536897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /usr/share/ca-certificates/5169372.pem (1708 bytes)
	I1121 23:57:12.864293  536897 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1121 23:57:12.881726  536897 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1121 23:57:12.894489  536897 ssh_runner.go:195] Run: openssl version
	I1121 23:57:12.900713  536897 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516937.pem && ln -fs /usr/share/ca-certificates/516937.pem /etc/ssl/certs/516937.pem"
	I1121 23:57:12.908959  536897 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516937.pem
	I1121 23:57:12.912843  536897 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:55 /usr/share/ca-certificates/516937.pem
	I1121 23:57:12.912895  536897 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516937.pem
	I1121 23:57:12.953705  536897 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516937.pem /etc/ssl/certs/51391683.0"
	I1121 23:57:12.961700  536897 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5169372.pem && ln -fs /usr/share/ca-certificates/5169372.pem /etc/ssl/certs/5169372.pem"
	I1121 23:57:12.969787  536897 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5169372.pem
	I1121 23:57:12.973434  536897 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:55 /usr/share/ca-certificates/5169372.pem
	I1121 23:57:12.973488  536897 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5169372.pem
	I1121 23:57:13.014644  536897 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5169372.pem /etc/ssl/certs/3ec20f2e.0"
	I1121 23:57:13.023149  536897 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1121 23:57:13.031430  536897 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1121 23:57:13.035122  536897 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I1121 23:57:13.035176  536897 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1121 23:57:13.076308  536897 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1121 23:57:13.084398  536897 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1121 23:57:13.088167  536897 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1121 23:57:13.131037  536897 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1121 23:57:13.172031  536897 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1121 23:57:13.212645  536897 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1121 23:57:13.253433  536897 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1121 23:57:13.294407  536897 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1121 23:57:13.335197  536897 kubeadm.go:401] StartCluster: {Name:functional-354825 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-354825 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 23:57:13.335277  536897 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1121 23:57:13.335357  536897 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 23:57:13.366449  536897 cri.go:89] found id: "21b012a2c915b29b087ee09086411311859280f0dbdd3ccc609367ccd81fafb6"
	I1121 23:57:13.366461  536897 cri.go:89] found id: "4ec9b22907cb542b63d71cd01f64224af3315e50b2b522081c6dc9ee631267ba"
	I1121 23:57:13.366464  536897 cri.go:89] found id: "d979ba16770b37d087fcda20d0998b346daeecaefa878edb4a1ebe248df2b584"
	I1121 23:57:13.366466  536897 cri.go:89] found id: "0ffac8dca99689cdc0b642991d6bf7f592a5431a4520c75680c0bcc8afe93ab9"
	I1121 23:57:13.366469  536897 cri.go:89] found id: "d40326817fd4686bc9ffd91958858d457a036787ee0862725a68831dcc544c77"
	I1121 23:57:13.366471  536897 cri.go:89] found id: "da23f5fa723d25d28d8c9bc8568dd5f2bd9a038b8b349fd74ba63112e83c7bdf"
	I1121 23:57:13.366473  536897 cri.go:89] found id: "2ff36b9c200a15e34b87c21ebf8aaa61fe4a5908d3317e71c9d71ca708945fc9"
	I1121 23:57:13.366475  536897 cri.go:89] found id: "9a1d91336a5b917fcd71534c4bddbfd6e77f4c40e9566f5451e4fe3644934874"
	I1121 23:57:13.366478  536897 cri.go:89] found id: ""
	I1121 23:57:13.366532  536897 ssh_runner.go:195] Run: sudo runc list -f json
	W1121 23:57:13.377601  536897 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:57:13Z" level=error msg="open /run/runc: no such file or directory"
	I1121 23:57:13.377673  536897 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1121 23:57:13.385398  536897 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1121 23:57:13.385408  536897 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1121 23:57:13.385459  536897 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1121 23:57:13.392850  536897 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1121 23:57:13.393371  536897 kubeconfig.go:125] found "functional-354825" server: "https://192.168.49.2:8441"
	I1121 23:57:13.394792  536897 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1121 23:57:13.403789  536897 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-11-21 23:55:20.679607631 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-11-21 23:57:12.528493844 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1121 23:57:13.403799  536897 kubeadm.go:1161] stopping kube-system containers ...
	I1121 23:57:13.403810  536897 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1121 23:57:13.403866  536897 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1121 23:57:13.434216  536897 cri.go:89] found id: "21b012a2c915b29b087ee09086411311859280f0dbdd3ccc609367ccd81fafb6"
	I1121 23:57:13.434227  536897 cri.go:89] found id: "4ec9b22907cb542b63d71cd01f64224af3315e50b2b522081c6dc9ee631267ba"
	I1121 23:57:13.434231  536897 cri.go:89] found id: "d979ba16770b37d087fcda20d0998b346daeecaefa878edb4a1ebe248df2b584"
	I1121 23:57:13.434233  536897 cri.go:89] found id: "0ffac8dca99689cdc0b642991d6bf7f592a5431a4520c75680c0bcc8afe93ab9"
	I1121 23:57:13.434236  536897 cri.go:89] found id: "d40326817fd4686bc9ffd91958858d457a036787ee0862725a68831dcc544c77"
	I1121 23:57:13.434239  536897 cri.go:89] found id: "da23f5fa723d25d28d8c9bc8568dd5f2bd9a038b8b349fd74ba63112e83c7bdf"
	I1121 23:57:13.434241  536897 cri.go:89] found id: "2ff36b9c200a15e34b87c21ebf8aaa61fe4a5908d3317e71c9d71ca708945fc9"
	I1121 23:57:13.434243  536897 cri.go:89] found id: "9a1d91336a5b917fcd71534c4bddbfd6e77f4c40e9566f5451e4fe3644934874"
	I1121 23:57:13.434245  536897 cri.go:89] found id: ""
	I1121 23:57:13.434250  536897 cri.go:252] Stopping containers: [21b012a2c915b29b087ee09086411311859280f0dbdd3ccc609367ccd81fafb6 4ec9b22907cb542b63d71cd01f64224af3315e50b2b522081c6dc9ee631267ba d979ba16770b37d087fcda20d0998b346daeecaefa878edb4a1ebe248df2b584 0ffac8dca99689cdc0b642991d6bf7f592a5431a4520c75680c0bcc8afe93ab9 d40326817fd4686bc9ffd91958858d457a036787ee0862725a68831dcc544c77 da23f5fa723d25d28d8c9bc8568dd5f2bd9a038b8b349fd74ba63112e83c7bdf 2ff36b9c200a15e34b87c21ebf8aaa61fe4a5908d3317e71c9d71ca708945fc9 9a1d91336a5b917fcd71534c4bddbfd6e77f4c40e9566f5451e4fe3644934874]
	I1121 23:57:13.434310  536897 ssh_runner.go:195] Run: which crictl
	I1121 23:57:13.438017  536897 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 21b012a2c915b29b087ee09086411311859280f0dbdd3ccc609367ccd81fafb6 4ec9b22907cb542b63d71cd01f64224af3315e50b2b522081c6dc9ee631267ba d979ba16770b37d087fcda20d0998b346daeecaefa878edb4a1ebe248df2b584 0ffac8dca99689cdc0b642991d6bf7f592a5431a4520c75680c0bcc8afe93ab9 d40326817fd4686bc9ffd91958858d457a036787ee0862725a68831dcc544c77 da23f5fa723d25d28d8c9bc8568dd5f2bd9a038b8b349fd74ba63112e83c7bdf 2ff36b9c200a15e34b87c21ebf8aaa61fe4a5908d3317e71c9d71ca708945fc9 9a1d91336a5b917fcd71534c4bddbfd6e77f4c40e9566f5451e4fe3644934874
	I1121 23:57:13.501474  536897 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1121 23:57:13.616639  536897 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1121 23:57:13.624668  536897 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Nov 21 23:55 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Nov 21 23:55 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Nov 21 23:55 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Nov 21 23:55 /etc/kubernetes/scheduler.conf
	
	I1121 23:57:13.624730  536897 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1121 23:57:13.632589  536897 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1121 23:57:13.640242  536897 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1121 23:57:13.640296  536897 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1121 23:57:13.648069  536897 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1121 23:57:13.655848  536897 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1121 23:57:13.655905  536897 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1121 23:57:13.663952  536897 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1121 23:57:13.671942  536897 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1121 23:57:13.671998  536897 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1121 23:57:13.679742  536897 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1121 23:57:13.687495  536897 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1121 23:57:13.734588  536897 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1121 23:57:15.278624  536897 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.544010985s)
	I1121 23:57:15.278687  536897 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1121 23:57:15.486613  536897 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1121 23:57:15.556724  536897 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1121 23:57:15.613134  536897 api_server.go:52] waiting for apiserver process to appear ...
	I1121 23:57:15.613202  536897 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 23:57:16.113955  536897 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 23:57:16.614222  536897 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 23:57:16.626918  536897 api_server.go:72] duration metric: took 1.013800319s to wait for apiserver process to appear ...
	I1121 23:57:16.626935  536897 api_server.go:88] waiting for apiserver healthz status ...
	I1121 23:57:16.626952  536897 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1121 23:57:19.524076  536897 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1121 23:57:19.524092  536897 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1121 23:57:19.524104  536897 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1121 23:57:19.537508  536897 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1121 23:57:19.537522  536897 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1121 23:57:19.627767  536897 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1121 23:57:19.646927  536897 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1121 23:57:19.646943  536897 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1121 23:57:20.127397  536897 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1121 23:57:20.144820  536897 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1121 23:57:20.144865  536897 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1121 23:57:20.627424  536897 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1121 23:57:20.639191  536897 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1121 23:57:20.639208  536897 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1121 23:57:21.127871  536897 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1121 23:57:21.136065  536897 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1121 23:57:21.149657  536897 api_server.go:141] control plane version: v1.34.1
	I1121 23:57:21.149673  536897 api_server.go:131] duration metric: took 4.522733719s to wait for apiserver health ...
	I1121 23:57:21.149681  536897 cni.go:84] Creating CNI manager for ""
	I1121 23:57:21.149686  536897 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 23:57:21.152836  536897 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1121 23:57:21.155773  536897 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1121 23:57:21.159923  536897 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1121 23:57:21.159934  536897 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1121 23:57:21.174634  536897 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1121 23:57:21.597169  536897 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 23:57:21.601221  536897 system_pods.go:59] 8 kube-system pods found
	I1121 23:57:21.601245  536897 system_pods.go:61] "coredns-66bc5c9577-lbq5l" [b01dc8c4-27f4-4f42-b0f9-82bf6dfcc946] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 23:57:21.601252  536897 system_pods.go:61] "etcd-functional-354825" [6280fc99-a1c2-49a4-8bc4-64b2aa374da1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1121 23:57:21.601257  536897 system_pods.go:61] "kindnet-fvhrx" [becf26c3-7a95-4100-b911-b0474a1e37df] Running
	I1121 23:57:21.601262  536897 system_pods.go:61] "kube-apiserver-functional-354825" [fb9a926a-fe96-451e-8d71-188b79525b3b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1121 23:57:21.601267  536897 system_pods.go:61] "kube-controller-manager-functional-354825" [bffc8eb2-4854-44a3-8d89-3a7587976783] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1121 23:57:21.601271  536897 system_pods.go:61] "kube-proxy-5ct95" [0a9b0d51-cdd4-4b77-afef-8257eec2b1c4] Running
	I1121 23:57:21.601276  536897 system_pods.go:61] "kube-scheduler-functional-354825" [334ac0ed-0d79-442b-aa29-776fbf00ecf3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1121 23:57:21.601279  536897 system_pods.go:61] "storage-provisioner" [769d2586-0909-4bce-b8e3-c4f125035c4a] Running
	I1121 23:57:21.601284  536897 system_pods.go:74] duration metric: took 4.102769ms to wait for pod list to return data ...
	I1121 23:57:21.601290  536897 node_conditions.go:102] verifying NodePressure condition ...
	I1121 23:57:21.606835  536897 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1121 23:57:21.606853  536897 node_conditions.go:123] node cpu capacity is 2
	I1121 23:57:21.606866  536897 node_conditions.go:105] duration metric: took 5.570634ms to run NodePressure ...
	I1121 23:57:21.606924  536897 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1121 23:57:21.882180  536897 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1121 23:57:21.885640  536897 kubeadm.go:744] kubelet initialised
	I1121 23:57:21.885650  536897 kubeadm.go:745] duration metric: took 3.458381ms waiting for restarted kubelet to initialise ...
	I1121 23:57:21.885664  536897 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1121 23:57:21.894911  536897 ops.go:34] apiserver oom_adj: -16
	I1121 23:57:21.894923  536897 kubeadm.go:602] duration metric: took 8.509509476s to restartPrimaryControlPlane
	I1121 23:57:21.894931  536897 kubeadm.go:403] duration metric: took 8.55974402s to StartCluster
	I1121 23:57:21.894949  536897 settings.go:142] acquiring lock: {Name:mk6c31eb57ec65b047b78b4e1046e03fe7cc77bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:57:21.895013  536897 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1121 23:57:21.895631  536897 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/kubeconfig: {Name:mkcd094d8a765e5a81369c0fb33cb5e696b17bb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:57:21.895835  536897 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1121 23:57:21.896089  536897 config.go:182] Loaded profile config "functional-354825": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1121 23:57:21.896127  536897 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1121 23:57:21.896185  536897 addons.go:70] Setting storage-provisioner=true in profile "functional-354825"
	I1121 23:57:21.896196  536897 addons.go:239] Setting addon storage-provisioner=true in "functional-354825"
	W1121 23:57:21.896201  536897 addons.go:248] addon storage-provisioner should already be in state true
	I1121 23:57:21.896230  536897 host.go:66] Checking if "functional-354825" exists ...
	I1121 23:57:21.896665  536897 cli_runner.go:164] Run: docker container inspect functional-354825 --format={{.State.Status}}
	I1121 23:57:21.897985  536897 addons.go:70] Setting default-storageclass=true in profile "functional-354825"
	I1121 23:57:21.898001  536897 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-354825"
	I1121 23:57:21.898365  536897 cli_runner.go:164] Run: docker container inspect functional-354825 --format={{.State.Status}}
	I1121 23:57:21.901121  536897 out.go:179] * Verifying Kubernetes components...
	I1121 23:57:21.905834  536897 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1121 23:57:21.926824  536897 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1121 23:57:21.929647  536897 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 23:57:21.929658  536897 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1121 23:57:21.929726  536897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-354825
	I1121 23:57:21.932091  536897 addons.go:239] Setting addon default-storageclass=true in "functional-354825"
	W1121 23:57:21.932100  536897 addons.go:248] addon default-storageclass should already be in state true
	I1121 23:57:21.932122  536897 host.go:66] Checking if "functional-354825" exists ...
	I1121 23:57:21.932535  536897 cli_runner.go:164] Run: docker container inspect functional-354825 --format={{.State.Status}}
	I1121 23:57:21.961941  536897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33505 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/functional-354825/id_rsa Username:docker}
	I1121 23:57:21.973097  536897 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1121 23:57:21.973112  536897 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1121 23:57:21.973170  536897 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-354825
	I1121 23:57:22.002089  536897 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33505 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/functional-354825/id_rsa Username:docker}
	I1121 23:57:22.106045  536897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1121 23:57:22.143244  536897 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1121 23:57:22.171189  536897 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1121 23:57:22.903547  536897 node_ready.go:35] waiting up to 6m0s for node "functional-354825" to be "Ready" ...
	I1121 23:57:22.907822  536897 node_ready.go:49] node "functional-354825" is "Ready"
	I1121 23:57:22.907838  536897 node_ready.go:38] duration metric: took 4.27467ms for node "functional-354825" to be "Ready" ...
	I1121 23:57:22.907849  536897 api_server.go:52] waiting for apiserver process to appear ...
	I1121 23:57:22.907908  536897 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1121 23:57:22.925655  536897 api_server.go:72] duration metric: took 1.02979548s to wait for apiserver process to appear ...
	I1121 23:57:22.925668  536897 api_server.go:88] waiting for apiserver healthz status ...
	I1121 23:57:22.925686  536897 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1121 23:57:22.954995  536897 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1121 23:57:22.957905  536897 addons.go:530] duration metric: took 1.061764204s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1121 23:57:22.961027  536897 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1121 23:57:22.963791  536897 api_server.go:141] control plane version: v1.34.1
	I1121 23:57:22.963805  536897 api_server.go:131] duration metric: took 38.131759ms to wait for apiserver health ...
	I1121 23:57:22.963813  536897 system_pods.go:43] waiting for kube-system pods to appear ...
	I1121 23:57:22.982778  536897 system_pods.go:59] 8 kube-system pods found
	I1121 23:57:22.982797  536897 system_pods.go:61] "coredns-66bc5c9577-lbq5l" [b01dc8c4-27f4-4f42-b0f9-82bf6dfcc946] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 23:57:22.982808  536897 system_pods.go:61] "etcd-functional-354825" [6280fc99-a1c2-49a4-8bc4-64b2aa374da1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1121 23:57:22.982812  536897 system_pods.go:61] "kindnet-fvhrx" [becf26c3-7a95-4100-b911-b0474a1e37df] Running
	I1121 23:57:22.982818  536897 system_pods.go:61] "kube-apiserver-functional-354825" [fb9a926a-fe96-451e-8d71-188b79525b3b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1121 23:57:22.982822  536897 system_pods.go:61] "kube-controller-manager-functional-354825" [bffc8eb2-4854-44a3-8d89-3a7587976783] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1121 23:57:22.982826  536897 system_pods.go:61] "kube-proxy-5ct95" [0a9b0d51-cdd4-4b77-afef-8257eec2b1c4] Running
	I1121 23:57:22.982831  536897 system_pods.go:61] "kube-scheduler-functional-354825" [334ac0ed-0d79-442b-aa29-776fbf00ecf3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1121 23:57:22.982833  536897 system_pods.go:61] "storage-provisioner" [769d2586-0909-4bce-b8e3-c4f125035c4a] Running
	I1121 23:57:22.982837  536897 system_pods.go:74] duration metric: took 19.02049ms to wait for pod list to return data ...
	I1121 23:57:22.982843  536897 default_sa.go:34] waiting for default service account to be created ...
	I1121 23:57:22.994305  536897 default_sa.go:45] found service account: "default"
	I1121 23:57:22.994319  536897 default_sa.go:55] duration metric: took 11.471384ms for default service account to be created ...
	I1121 23:57:22.994327  536897 system_pods.go:116] waiting for k8s-apps to be running ...
	I1121 23:57:23.011944  536897 system_pods.go:86] 8 kube-system pods found
	I1121 23:57:23.011973  536897 system_pods.go:89] "coredns-66bc5c9577-lbq5l" [b01dc8c4-27f4-4f42-b0f9-82bf6dfcc946] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1121 23:57:23.011981  536897 system_pods.go:89] "etcd-functional-354825" [6280fc99-a1c2-49a4-8bc4-64b2aa374da1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1121 23:57:23.011986  536897 system_pods.go:89] "kindnet-fvhrx" [becf26c3-7a95-4100-b911-b0474a1e37df] Running
	I1121 23:57:23.011992  536897 system_pods.go:89] "kube-apiserver-functional-354825" [fb9a926a-fe96-451e-8d71-188b79525b3b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1121 23:57:23.011998  536897 system_pods.go:89] "kube-controller-manager-functional-354825" [bffc8eb2-4854-44a3-8d89-3a7587976783] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1121 23:57:23.012001  536897 system_pods.go:89] "kube-proxy-5ct95" [0a9b0d51-cdd4-4b77-afef-8257eec2b1c4] Running
	I1121 23:57:23.012006  536897 system_pods.go:89] "kube-scheduler-functional-354825" [334ac0ed-0d79-442b-aa29-776fbf00ecf3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1121 23:57:23.012009  536897 system_pods.go:89] "storage-provisioner" [769d2586-0909-4bce-b8e3-c4f125035c4a] Running
	I1121 23:57:23.012015  536897 system_pods.go:126] duration metric: took 17.683838ms to wait for k8s-apps to be running ...
	I1121 23:57:23.012022  536897 system_svc.go:44] waiting for kubelet service to be running ....
	I1121 23:57:23.012089  536897 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1121 23:57:23.027030  536897 system_svc.go:56] duration metric: took 14.989382ms WaitForService to wait for kubelet
	I1121 23:57:23.027046  536897 kubeadm.go:587] duration metric: took 1.131192966s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1121 23:57:23.027064  536897 node_conditions.go:102] verifying NodePressure condition ...
	I1121 23:57:23.032765  536897 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1121 23:57:23.032780  536897 node_conditions.go:123] node cpu capacity is 2
	I1121 23:57:23.032790  536897 node_conditions.go:105] duration metric: took 5.722564ms to run NodePressure ...
	I1121 23:57:23.032801  536897 start.go:242] waiting for startup goroutines ...
	I1121 23:57:23.032808  536897 start.go:247] waiting for cluster config update ...
	I1121 23:57:23.032828  536897 start.go:256] writing updated cluster config ...
	I1121 23:57:23.033119  536897 ssh_runner.go:195] Run: rm -f paused
	I1121 23:57:23.037337  536897 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 23:57:23.041003  536897 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-lbq5l" in "kube-system" namespace to be "Ready" or be gone ...
	W1121 23:57:25.047060  536897 pod_ready.go:104] pod "coredns-66bc5c9577-lbq5l" is not "Ready", error: <nil>
	W1121 23:57:27.546137  536897 pod_ready.go:104] pod "coredns-66bc5c9577-lbq5l" is not "Ready", error: <nil>
	I1121 23:57:28.567635  536897 pod_ready.go:94] pod "coredns-66bc5c9577-lbq5l" is "Ready"
	I1121 23:57:28.567651  536897 pod_ready.go:86] duration metric: took 5.526632721s for pod "coredns-66bc5c9577-lbq5l" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:57:28.571591  536897 pod_ready.go:83] waiting for pod "etcd-functional-354825" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:57:28.584964  536897 pod_ready.go:94] pod "etcd-functional-354825" is "Ready"
	I1121 23:57:28.584978  536897 pod_ready.go:86] duration metric: took 13.37259ms for pod "etcd-functional-354825" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:57:28.591657  536897 pod_ready.go:83] waiting for pod "kube-apiserver-functional-354825" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:57:28.606514  536897 pod_ready.go:94] pod "kube-apiserver-functional-354825" is "Ready"
	I1121 23:57:28.606540  536897 pod_ready.go:86] duration metric: took 14.869541ms for pod "kube-apiserver-functional-354825" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:57:28.609188  536897 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-354825" in "kube-system" namespace to be "Ready" or be gone ...
	W1121 23:57:30.614727  536897 pod_ready.go:104] pod "kube-controller-manager-functional-354825" is not "Ready", error: <nil>
	W1121 23:57:32.614899  536897 pod_ready.go:104] pod "kube-controller-manager-functional-354825" is not "Ready", error: <nil>
	I1121 23:57:33.614929  536897 pod_ready.go:94] pod "kube-controller-manager-functional-354825" is "Ready"
	I1121 23:57:33.614943  536897 pod_ready.go:86] duration metric: took 5.005742314s for pod "kube-controller-manager-functional-354825" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:57:33.617255  536897 pod_ready.go:83] waiting for pod "kube-proxy-5ct95" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:57:33.621890  536897 pod_ready.go:94] pod "kube-proxy-5ct95" is "Ready"
	I1121 23:57:33.621903  536897 pod_ready.go:86] duration metric: took 4.63594ms for pod "kube-proxy-5ct95" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:57:33.624148  536897 pod_ready.go:83] waiting for pod "kube-scheduler-functional-354825" in "kube-system" namespace to be "Ready" or be gone ...
	W1121 23:57:35.629776  536897 pod_ready.go:104] pod "kube-scheduler-functional-354825" is not "Ready", error: <nil>
	I1121 23:57:36.129501  536897 pod_ready.go:94] pod "kube-scheduler-functional-354825" is "Ready"
	I1121 23:57:36.129515  536897 pod_ready.go:86] duration metric: took 2.505356751s for pod "kube-scheduler-functional-354825" in "kube-system" namespace to be "Ready" or be gone ...
	I1121 23:57:36.129525  536897 pod_ready.go:40] duration metric: took 13.092167s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1121 23:57:36.186632  536897 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1121 23:57:36.199387  536897 out.go:179] * Done! kubectl is now configured to use "functional-354825" cluster and "default" namespace by default
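
The restart log above shows minikube polling https://192.168.49.2:8441/healthz until the apiserver answers 200, tolerating the 500 responses emitted while the rbac/bootstrap-roles post-start hook is still pending. A minimal Go sketch of that kind of polling loop, assuming the endpoint from the log and using InsecureSkipVerify only to keep the example self-contained (minikube itself authenticates against the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz repeatedly queries an apiserver /healthz endpoint until it
// returns 200 or the deadline expires, mirroring the retry loop in the log.
func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned 200: %s\n", body)
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := pollHealthz("https://192.168.49.2:8441/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}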
	
	
	==> CRI-O <==
	Nov 21 23:58:10 functional-354825 crio[3536]: time="2025-11-21T23:58:10.678598933Z" level=info msg="Got pod network &{Name:hello-node-75c85bcc94-hzm54 Namespace:default ID:0e68736cdf318b721f4858b80528804807848b5fc0c869fcbe126a09c8873dc4 UID:05dad3cf-08e8-419e-bfb3-42a931a1093a NetNS:/var/run/netns/bc23666e-06d0-472b-a141-af8e809f3875 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40017cc1f8}] Aliases:map[]}"
	Nov 21 23:58:10 functional-354825 crio[3536]: time="2025-11-21T23:58:10.678748435Z" level=info msg="Checking pod default_hello-node-75c85bcc94-hzm54 for CNI network kindnet (type=ptp)"
	Nov 21 23:58:10 functional-354825 crio[3536]: time="2025-11-21T23:58:10.681249739Z" level=info msg="Ran pod sandbox 0e68736cdf318b721f4858b80528804807848b5fc0c869fcbe126a09c8873dc4 with infra container: default/hello-node-75c85bcc94-hzm54/POD" id=21e29694-0b14-4463-9acd-57f65c4b241a name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 21 23:58:10 functional-354825 crio[3536]: time="2025-11-21T23:58:10.685573868Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=a7ac2c40-ee79-4ccf-a58a-cb53e1a9bd07 name=/runtime.v1.ImageService/PullImage
	Nov 21 23:58:15 functional-354825 crio[3536]: time="2025-11-21T23:58:15.584107821Z" level=info msg="Stopping pod sandbox: 19c51cd5339250816b5be58a69e6127fd7d5bf7c3ea3d5ad593b89a2d5eaca15" id=f004e002-bf66-45b7-a030-0e94550192c9 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 21 23:58:15 functional-354825 crio[3536]: time="2025-11-21T23:58:15.584165116Z" level=info msg="Stopped pod sandbox (already stopped): 19c51cd5339250816b5be58a69e6127fd7d5bf7c3ea3d5ad593b89a2d5eaca15" id=f004e002-bf66-45b7-a030-0e94550192c9 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 21 23:58:15 functional-354825 crio[3536]: time="2025-11-21T23:58:15.584882207Z" level=info msg="Removing pod sandbox: 19c51cd5339250816b5be58a69e6127fd7d5bf7c3ea3d5ad593b89a2d5eaca15" id=ce8d4da7-922a-4d24-bd76-0ed5c62ce6be name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 21 23:58:15 functional-354825 crio[3536]: time="2025-11-21T23:58:15.588627645Z" level=info msg="Removed pod sandbox: 19c51cd5339250816b5be58a69e6127fd7d5bf7c3ea3d5ad593b89a2d5eaca15" id=ce8d4da7-922a-4d24-bd76-0ed5c62ce6be name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 21 23:58:15 functional-354825 crio[3536]: time="2025-11-21T23:58:15.589202373Z" level=info msg="Stopping pod sandbox: bab2479cafd17598d453ac6746e248cd433767c5e8e813c40047197d2b88849f" id=f37f015b-6cc8-4173-ae71-3a7a5860b502 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 21 23:58:15 functional-354825 crio[3536]: time="2025-11-21T23:58:15.589246097Z" level=info msg="Stopped pod sandbox (already stopped): bab2479cafd17598d453ac6746e248cd433767c5e8e813c40047197d2b88849f" id=f37f015b-6cc8-4173-ae71-3a7a5860b502 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 21 23:58:15 functional-354825 crio[3536]: time="2025-11-21T23:58:15.589558203Z" level=info msg="Removing pod sandbox: bab2479cafd17598d453ac6746e248cd433767c5e8e813c40047197d2b88849f" id=8ae678f2-6b86-41d2-8c6d-fd5cde65e768 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 21 23:58:15 functional-354825 crio[3536]: time="2025-11-21T23:58:15.592961201Z" level=info msg="Removed pod sandbox: bab2479cafd17598d453ac6746e248cd433767c5e8e813c40047197d2b88849f" id=8ae678f2-6b86-41d2-8c6d-fd5cde65e768 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 21 23:58:15 functional-354825 crio[3536]: time="2025-11-21T23:58:15.593390162Z" level=info msg="Stopping pod sandbox: 6ca3a896e4c63e9c4ed15f93b937f86a2522370ad1f3a3120b56a1c58f99c771" id=dd2f0c91-899f-433d-940f-4c6816466836 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 21 23:58:15 functional-354825 crio[3536]: time="2025-11-21T23:58:15.593434895Z" level=info msg="Stopped pod sandbox (already stopped): 6ca3a896e4c63e9c4ed15f93b937f86a2522370ad1f3a3120b56a1c58f99c771" id=dd2f0c91-899f-433d-940f-4c6816466836 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 21 23:58:15 functional-354825 crio[3536]: time="2025-11-21T23:58:15.593725323Z" level=info msg="Removing pod sandbox: 6ca3a896e4c63e9c4ed15f93b937f86a2522370ad1f3a3120b56a1c58f99c771" id=8ec05a66-947e-4b04-822f-8d404ac50b0d name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 21 23:58:15 functional-354825 crio[3536]: time="2025-11-21T23:58:15.597279636Z" level=info msg="Removed pod sandbox: 6ca3a896e4c63e9c4ed15f93b937f86a2522370ad1f3a3120b56a1c58f99c771" id=8ec05a66-947e-4b04-822f-8d404ac50b0d name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 21 23:58:21 functional-354825 crio[3536]: time="2025-11-21T23:58:21.636231183Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=cbb4466d-1b4e-4d72-9073-de3dfc31f2d5 name=/runtime.v1.ImageService/PullImage
	Nov 21 23:58:32 functional-354825 crio[3536]: time="2025-11-21T23:58:32.634620508Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=56be43ac-80ae-4011-be2f-d4cdc740d7c6 name=/runtime.v1.ImageService/PullImage
	Nov 21 23:58:48 functional-354825 crio[3536]: time="2025-11-21T23:58:48.634407307Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=dd06aeaf-c0ad-4848-922d-6224c9ae1320 name=/runtime.v1.ImageService/PullImage
	Nov 21 23:59:16 functional-354825 crio[3536]: time="2025-11-21T23:59:16.634446479Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=38e98932-86a5-4f45-a7bd-6942e9ee5e76 name=/runtime.v1.ImageService/PullImage
	Nov 21 23:59:42 functional-354825 crio[3536]: time="2025-11-21T23:59:42.634994569Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=9c90c4a2-c81c-4f08-ad9e-e11b741a6de8 name=/runtime.v1.ImageService/PullImage
	Nov 22 00:00:46 functional-354825 crio[3536]: time="2025-11-22T00:00:46.634474111Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=edebb3bc-0c29-47ff-bf12-b48cf7a72832 name=/runtime.v1.ImageService/PullImage
	Nov 22 00:01:04 functional-354825 crio[3536]: time="2025-11-22T00:01:04.634560306Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=0346fe84-b859-4fa5-a0fc-c690a49db7b1 name=/runtime.v1.ImageService/PullImage
	Nov 22 00:03:34 functional-354825 crio[3536]: time="2025-11-22T00:03:34.634403528Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=863ad82c-0fb6-47d1-bf61-51ffca66b636 name=/runtime.v1.ImageService/PullImage
	Nov 22 00:03:51 functional-354825 crio[3536]: time="2025-11-22T00:03:51.636346566Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=afb4d699-3fda-4974-8da1-5a22efeb3b53 name=/runtime.v1.ImageService/PullImage
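
The CRI-O entries above show repeated "Pulling image: kicbase/echo-server:latest" attempts with no completion, which matches the hello-node pods waiting on their image in the failed ServiceCmd tests. One way to check whether the image ever reached the node's CRI store is to run crictl over `minikube ssh`; a rough sketch in which the profile and image names come from the log and the exact invocation is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imageInStore runs `crictl images` on the minikube node (reached via
// `minikube ssh`) and reports whether the given image reference appears.
func imageInStore(profile, image string) (bool, error) {
	out, err := exec.Command("minikube", "-p", profile, "ssh", "--",
		"sudo", "crictl", "images").CombinedOutput()
	if err != nil {
		return false, fmt.Errorf("crictl images failed: %v (%s)", err, out)
	}
	return strings.Contains(string(out), image), nil
}

func main() {
	found, err := imageInStore("functional-354825", "kicbase/echo-server")
	fmt.Println(found, err)
}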
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	a1b665625ba58       docker.io/library/nginx@sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712   9 minutes ago       Running             myfrontend                0                   599a1725b931b       sp-pod                                      default
	465785613a7a6       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90   10 minutes ago      Running             nginx                     0                   80f5717f9054c       nginx-svc                                   default
	148ce3ddc303d       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  10 minutes ago      Running             coredns                   2                   5894a9a5ba3db       coredns-66bc5c9577-lbq5l                    kube-system
	d9c96c6cc858a       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  10 minutes ago      Running             kube-proxy                2                   f0ef518098f80       kube-proxy-5ct95                            kube-system
	b91e8f3f7d860       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  10 minutes ago      Running             kindnet-cni               2                   69f7f91d7f592       kindnet-fvhrx                               kube-system
	815bac2a26dc7       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Running             storage-provisioner       2                   06a1aaffbcb65       storage-provisioner                         kube-system
	6b65e5995c478       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                  10 minutes ago      Running             kube-apiserver            0                   00f8e3a821afa       kube-apiserver-functional-354825            kube-system
	a5d32c150e163       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  10 minutes ago      Running             kube-scheduler            2                   e9014bd5268f5       kube-scheduler-functional-354825            kube-system
	ac5a651c54424       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  10 minutes ago      Running             kube-controller-manager   2                   63cc35797ce65       kube-controller-manager-functional-354825   kube-system
	efcbbb3102237       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  10 minutes ago      Running             etcd                      2                   26ceb33531a7b       etcd-functional-354825                      kube-system
	21b012a2c915b       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  11 minutes ago      Exited              kube-controller-manager   1                   63cc35797ce65       kube-controller-manager-functional-354825   kube-system
	4ec9b22907cb5       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  11 minutes ago      Exited              storage-provisioner       1                   06a1aaffbcb65       storage-provisioner                         kube-system
	0ffac8dca9968       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  11 minutes ago      Exited              kube-proxy                1                   f0ef518098f80       kube-proxy-5ct95                            kube-system
	d40326817fd46       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  11 minutes ago      Exited              kindnet-cni               1                   69f7f91d7f592       kindnet-fvhrx                               kube-system
	da23f5fa723d2       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  11 minutes ago      Exited              coredns                   1                   5894a9a5ba3db       coredns-66bc5c9577-lbq5l                    kube-system
	2ff36b9c200a1       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  11 minutes ago      Exited              kube-scheduler            1                   e9014bd5268f5       kube-scheduler-functional-354825            kube-system
	9a1d91336a5b9       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  11 minutes ago      Exited              etcd                      1                   26ceb33531a7b       etcd-functional-354825                      kube-system
	
	
	==> coredns [148ce3ddc303d5bfcfa734d2ae81a6d6cd65de77aadb0ded802a8838702185bc] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34486 - 32033 "HINFO IN 3792304447700634848.4896045059128584090. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.027032106s
	
	
	==> coredns [da23f5fa723d25d28d8c9bc8568dd5f2bd9a038b8b349fd74ba63112e83c7bdf] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52129 - 18702 "HINFO IN 2206141408281481597.1372526469490822655. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.030886704s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
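
The denied list calls above fall in the window when the restarted apiserver had not yet completed its rbac/bootstrap-roles hook (visible in the earlier healthz output). Once the control plane settles, the service account's effective permissions can be confirmed with `kubectl auth can-i`; a small wrapper, with the verb, resource, and subject taken from the error lines and everything else about the environment assumed:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// canI wraps `kubectl auth can-i`; kubectl exits non-zero when the answer is
// "no", so that case is reported as a negative answer rather than a failure.
func canI(verb, resource, subject string) (bool, error) {
	out, err := exec.Command("kubectl", "auth", "can-i", verb, resource, "--as", subject).CombinedOutput()
	answer := strings.TrimSpace(string(out))
	switch answer {
	case "yes":
		return true, nil
	case "no":
		return false, nil
	}
	return false, fmt.Errorf("kubectl auth can-i: %v (%s)", err, out)
}

func main() {
	ok, err := canI("list", "endpointslices.discovery.k8s.io", "system:serviceaccount:kube-system:coredns")
	fmt.Println(ok, err)
}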
	
	
	==> describe nodes <==
	Name:               functional-354825
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-354825
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=functional-354825
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_21T23_55_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 21 Nov 2025 23:55:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-354825
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:07:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:07:33 +0000   Fri, 21 Nov 2025 23:55:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:07:33 +0000   Fri, 21 Nov 2025 23:55:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:07:33 +0000   Fri, 21 Nov 2025 23:55:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 22 Nov 2025 00:07:33 +0000   Fri, 21 Nov 2025 23:56:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-354825
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c92eea4d3c03c0156e01fcf8691e3907
	  System UUID:                5bbf2142-a0ab-4ead-8818-bc8ebe884184
	  Boot ID:                    72ac7385-472f-47d1-a23e-bd80468d6e09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-hzm54                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m46s
	  default                     hello-node-connect-7d85dfc575-fglvv          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m53s
	  kube-system                 coredns-66bc5c9577-lbq5l                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-354825                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-fvhrx                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-354825             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-354825    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-5ct95                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-354825             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-354825 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-354825 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-354825 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                node-controller  Node functional-354825 event: Registered Node functional-354825 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-354825 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node functional-354825 event: Registered Node functional-354825 in Controller
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-354825 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-354825 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-354825 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-354825 event: Registered Node functional-354825 in Controller
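
The node report above carries the data the test's readiness wait ultimately depends on: the Ready entry under Conditions plus the allocatable figures. The same condition can be read directly with a JSONPath query; a short sketch using the node name from this report:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// nodeReady asks kubectl for the status of the node's Ready condition, the
// same field shown under "Conditions" in the describe output above.
func nodeReady(node string) (bool, error) {
	out, err := exec.Command("kubectl", "get", "node", node,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	ready, err := nodeReady("functional-354825")
	fmt.Println(ready, err)
}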
	
	
	==> dmesg <==
	[ +32.573452] overlayfs: idmapped layers are currently not supported
	[  +9.452963] overlayfs: idmapped layers are currently not supported
	[Nov21 23:08] overlayfs: idmapped layers are currently not supported
	[ +24.877472] overlayfs: idmapped layers are currently not supported
	[Nov21 23:11] overlayfs: idmapped layers are currently not supported
	[Nov21 23:13] overlayfs: idmapped layers are currently not supported
	[Nov21 23:14] overlayfs: idmapped layers are currently not supported
	[Nov21 23:15] overlayfs: idmapped layers are currently not supported
	[Nov21 23:16] overlayfs: idmapped layers are currently not supported
	[Nov21 23:17] overlayfs: idmapped layers are currently not supported
	[ +10.681159] overlayfs: idmapped layers are currently not supported
	[Nov21 23:19] overlayfs: idmapped layers are currently not supported
	[ +15.192296] overlayfs: idmapped layers are currently not supported
	[Nov21 23:20] overlayfs: idmapped layers are currently not supported
	[Nov21 23:21] overlayfs: idmapped layers are currently not supported
	[Nov21 23:22] overlayfs: idmapped layers are currently not supported
	[ +12.884842] overlayfs: idmapped layers are currently not supported
	[Nov21 23:23] overlayfs: idmapped layers are currently not supported
	[ +12.022080] overlayfs: idmapped layers are currently not supported
	[Nov21 23:25] overlayfs: idmapped layers are currently not supported
	[ +24.447615] overlayfs: idmapped layers are currently not supported
	[Nov21 23:46] kauditd_printk_skb: 8 callbacks suppressed
	[Nov21 23:48] overlayfs: idmapped layers are currently not supported
	[Nov21 23:54] overlayfs: idmapped layers are currently not supported
	[Nov21 23:55] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [9a1d91336a5b917fcd71534c4bddbfd6e77f4c40e9566f5451e4fe3644934874] <==
	{"level":"warn","ts":"2025-11-21T23:56:39.768311Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:56:39.782361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:56:39.799030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:56:39.842138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:56:39.848081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:56:39.872914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:56:39.931265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46474","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-21T23:57:05.225564Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-21T23:57:05.225632Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-354825","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-11-21T23:57:05.225743Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-21T23:57:05.364317Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-21T23:57:05.365502Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-21T23:57:05.365555Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-11-21T23:57:05.365618Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-21T23:57:05.365638Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-21T23:57:05.365629Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-21T23:57:05.365728Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-21T23:57:05.365760Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-21T23:57:05.365875Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-21T23:57:05.365897Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-21T23:57:05.365951Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-21T23:57:05.369782Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-21T23:57:05.369913Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-21T23:57:05.369956Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-21T23:57:05.369974Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-354825","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [efcbbb3102237ebd441d7f70bc7028b0b88c47cc0c7a269fb997a689b92ebe58] <==
	{"level":"warn","ts":"2025-11-21T23:57:18.191670Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:57:18.244127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:57:18.244490Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:57:18.255522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:57:18.265102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:57:18.284971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:57:18.314381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:57:18.339370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:57:18.354883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:57:18.371405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:57:18.390779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:57:18.410059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:57:18.426904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:57:18.439912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:57:18.486030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:57:18.513529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:57:18.530175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:57:18.554013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:57:18.590520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:57:18.610587Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:57:18.634762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-21T23:57:18.685861Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46928","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-22T00:07:17.048680Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1099}
	{"level":"info","ts":"2025-11-22T00:07:17.072679Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1099,"took":"23.620578ms","hash":1162976398,"current-db-size-bytes":3227648,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":1372160,"current-db-size-in-use":"1.4 MB"}
	{"level":"info","ts":"2025-11-22T00:07:17.072728Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1162976398,"revision":1099,"compact-revision":-1}
	
	
	==> kernel <==
	 00:07:56 up  4:49,  0 user,  load average: 0.10, 0.28, 0.69
	Linux functional-354825 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b91e8f3f7d8601c8a2d4fb2b3910f243dd8a4603d4f2d57afe93bc6630fe4e6e] <==
	I1122 00:05:50.425021       1 main.go:301] handling current node
	I1122 00:06:00.423628       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1122 00:06:00.423674       1 main.go:301] handling current node
	I1122 00:06:10.420592       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1122 00:06:10.420643       1 main.go:301] handling current node
	I1122 00:06:20.420563       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1122 00:06:20.420673       1 main.go:301] handling current node
	I1122 00:06:30.420523       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1122 00:06:30.420557       1 main.go:301] handling current node
	I1122 00:06:40.421901       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1122 00:06:40.421938       1 main.go:301] handling current node
	I1122 00:06:50.427990       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1122 00:06:50.428023       1 main.go:301] handling current node
	I1122 00:07:00.425966       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1122 00:07:00.426005       1 main.go:301] handling current node
	I1122 00:07:10.420637       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1122 00:07:10.420680       1 main.go:301] handling current node
	I1122 00:07:20.420637       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1122 00:07:20.420756       1 main.go:301] handling current node
	I1122 00:07:30.425102       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1122 00:07:30.425138       1 main.go:301] handling current node
	I1122 00:07:40.420604       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1122 00:07:40.420635       1 main.go:301] handling current node
	I1122 00:07:50.424688       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1122 00:07:50.424795       1 main.go:301] handling current node
	
	
	==> kindnet [d40326817fd4686bc9ffd91958858d457a036787ee0862725a68831dcc544c77] <==
	I1121 23:56:36.456869       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1121 23:56:36.457195       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1121 23:56:36.457327       1 main.go:148] setting mtu 1500 for CNI 
	I1121 23:56:36.457340       1 main.go:178] kindnetd IP family: "ipv4"
	I1121 23:56:36.457357       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-21T23:56:36Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1121 23:56:36.718736       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1121 23:56:36.718760       1 controller.go:381] "Waiting for informer caches to sync"
	I1121 23:56:36.718769       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1121 23:56:36.718878       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1121 23:56:40.936480       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1121 23:56:40.936681       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1121 23:56:40.937972       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"networkpolicies\" in API group \"networking.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1121 23:56:40.953716       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1121 23:56:42.619480       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1121 23:56:42.619603       1 metrics.go:72] Registering metrics
	I1121 23:56:42.619679       1 controller.go:711] "Syncing nftables rules"
	I1121 23:56:46.707237       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 23:56:46.707304       1 main.go:301] handling current node
	I1121 23:56:56.707327       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1121 23:56:56.707405       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6b65e5995c4782e20ae49828a060d00a8ec2d8b597207de62f3530a9c5a42700] <==
	I1121 23:57:19.622947       1 cache.go:39] Caches are synced for autoregister controller
	I1121 23:57:19.621832       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1121 23:57:19.621992       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1121 23:57:19.621786       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	E1121 23:57:19.625485       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1121 23:57:19.626205       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1121 23:57:19.626759       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1121 23:57:19.636015       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1121 23:57:19.636042       1 policy_source.go:240] refreshing policies
	I1121 23:57:19.673919       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1121 23:57:19.690021       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1121 23:57:20.407961       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1121 23:57:21.587939       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1121 23:57:21.756915       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1121 23:57:21.841178       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1121 23:57:21.848926       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1121 23:57:23.029329       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1121 23:57:23.309388       1 controller.go:667] quota admission added evaluator for: endpoints
	I1121 23:57:23.408930       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1121 23:57:39.467903       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.99.29.74"}
	I1121 23:57:45.406714       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.109.83.242"}
	I1121 23:57:54.117633       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.98.36.84"}
	E1121 23:58:10.234007       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:60150: use of closed network connection
	I1121 23:58:10.448005       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.107.166.192"}
	I1122 00:07:19.572921       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [21b012a2c915b29b087ee09086411311859280f0dbdd3ccc609367ccd81fafb6] <==
	I1121 23:56:44.442339       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1121 23:56:44.445505       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1121 23:56:44.447750       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1121 23:56:44.450095       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1121 23:56:44.452272       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 23:56:44.454241       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1121 23:56:44.473870       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1121 23:56:44.474020       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 23:56:44.477215       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1121 23:56:44.477362       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1121 23:56:44.478455       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1121 23:56:44.478466       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1121 23:56:44.481824       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1121 23:56:44.488140       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1121 23:56:44.490453       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1121 23:56:44.490656       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 23:56:44.491670       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1121 23:56:44.493039       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1121 23:56:44.493110       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1121 23:56:44.493232       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1121 23:56:44.493331       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-354825"
	I1121 23:56:44.493399       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1121 23:56:44.495412       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1121 23:56:44.498356       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1121 23:56:44.500602       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	
	
	==> kube-controller-manager [ac5a651c54424437554491e8b2940f04af93e9a7ed6ee1391db847bb09c41891] <==
	I1121 23:57:23.027402       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1121 23:57:23.029057       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1121 23:57:23.032263       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1121 23:57:23.034867       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1121 23:57:23.038265       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1121 23:57:23.038452       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1121 23:57:23.043547       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1121 23:57:23.046813       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1121 23:57:23.051215       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1121 23:57:23.052568       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1121 23:57:23.052590       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1121 23:57:23.052618       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1121 23:57:23.052629       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1121 23:57:23.056793       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 23:57:23.061902       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1121 23:57:23.062042       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1121 23:57:23.062093       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1121 23:57:23.062134       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1121 23:57:23.062162       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1121 23:57:23.067220       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1121 23:57:23.067391       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1121 23:57:23.085926       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1121 23:57:23.107800       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1121 23:57:23.107829       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1121 23:57:23.107838       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [0ffac8dca99689cdc0b642991d6bf7f592a5431a4520c75680c0bcc8afe93ab9] <==
	I1121 23:56:38.270939       1 server_linux.go:53] "Using iptables proxy"
	I1121 23:56:39.654805       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 23:56:41.138723       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 23:56:41.138769       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1121 23:56:41.138838       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 23:56:41.653230       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 23:56:41.653304       1 server_linux.go:132] "Using iptables Proxier"
	I1121 23:56:41.829503       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 23:56:41.875653       1 server.go:527] "Version info" version="v1.34.1"
	I1121 23:56:41.895670       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 23:56:41.898062       1 config.go:106] "Starting endpoint slice config controller"
	I1121 23:56:41.899448       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 23:56:41.899890       1 config.go:200] "Starting service config controller"
	I1121 23:56:41.903684       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 23:56:41.900209       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 23:56:41.903823       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 23:56:41.900626       1 config.go:309] "Starting node config controller"
	I1121 23:56:41.903904       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 23:56:41.903956       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 23:56:42.003221       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1121 23:56:42.004314       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1121 23:56:42.004456       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [d9c96c6cc858a0c1f7228d9b34afb8b03377e573780786adca6992069c316713] <==
	I1121 23:57:20.269849       1 server_linux.go:53] "Using iptables proxy"
	I1121 23:57:20.498295       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1121 23:57:20.609870       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1121 23:57:20.611525       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1121 23:57:20.611697       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1121 23:57:20.702257       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1121 23:57:20.702387       1 server_linux.go:132] "Using iptables Proxier"
	I1121 23:57:20.711831       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1121 23:57:20.712132       1 server.go:527] "Version info" version="v1.34.1"
	I1121 23:57:20.712158       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 23:57:20.713288       1 config.go:200] "Starting service config controller"
	I1121 23:57:20.713309       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1121 23:57:20.716582       1 config.go:106] "Starting endpoint slice config controller"
	I1121 23:57:20.716605       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1121 23:57:20.716620       1 config.go:403] "Starting serviceCIDR config controller"
	I1121 23:57:20.716626       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1121 23:57:20.717010       1 config.go:309] "Starting node config controller"
	I1121 23:57:20.717027       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1121 23:57:20.717033       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1121 23:57:20.813960       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1121 23:57:20.817514       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1121 23:57:20.817617       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2ff36b9c200a15e34b87c21ebf8aaa61fe4a5908d3317e71c9d71ca708945fc9] <==
	I1121 23:56:39.656389       1 serving.go:386] Generated self-signed cert in-memory
	I1121 23:56:41.868728       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1121 23:56:41.868944       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 23:56:41.885462       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1121 23:56:41.885994       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1121 23:56:41.886340       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 23:56:41.886364       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 23:56:41.886402       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1121 23:56:41.886410       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1121 23:56:41.893303       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1121 23:56:41.893402       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1121 23:56:41.986569       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1121 23:56:41.986651       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1121 23:56:41.986770       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 23:57:05.226757       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1121 23:57:05.226853       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1121 23:57:05.226865       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1121 23:57:05.226885       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 23:57:05.226903       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1121 23:57:05.226921       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1121 23:57:05.227199       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1121 23:57:05.227224       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [a5d32c150e16390c75aca03683079bb529914ec19d201599b85f944bb0efb494] <==
	I1121 23:57:18.640277       1 serving.go:386] Generated self-signed cert in-memory
	I1121 23:57:20.743351       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1121 23:57:20.743440       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1121 23:57:20.748030       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1121 23:57:20.748082       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1121 23:57:20.748118       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 23:57:20.748135       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 23:57:20.748237       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1121 23:57:20.748250       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1121 23:57:20.749092       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1121 23:57:20.749168       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1121 23:57:20.848368       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1121 23:57:20.848637       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1121 23:57:20.848322       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 22 00:05:14 functional-354825 kubelet[3861]: E1122 00:05:14.634398    3861 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-fglvv" podUID="065ee2fd-7122-4a1b-ab5d-ca8bf81bcd00"
	Nov 22 00:05:21 functional-354825 kubelet[3861]: E1122 00:05:21.634561    3861 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hzm54" podUID="05dad3cf-08e8-419e-bfb3-42a931a1093a"
	Nov 22 00:05:25 functional-354825 kubelet[3861]: E1122 00:05:25.635596    3861 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-fglvv" podUID="065ee2fd-7122-4a1b-ab5d-ca8bf81bcd00"
	Nov 22 00:05:34 functional-354825 kubelet[3861]: E1122 00:05:34.633640    3861 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hzm54" podUID="05dad3cf-08e8-419e-bfb3-42a931a1093a"
	Nov 22 00:05:40 functional-354825 kubelet[3861]: E1122 00:05:40.634607    3861 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-fglvv" podUID="065ee2fd-7122-4a1b-ab5d-ca8bf81bcd00"
	Nov 22 00:05:47 functional-354825 kubelet[3861]: E1122 00:05:47.635702    3861 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hzm54" podUID="05dad3cf-08e8-419e-bfb3-42a931a1093a"
	Nov 22 00:05:54 functional-354825 kubelet[3861]: E1122 00:05:54.634621    3861 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-fglvv" podUID="065ee2fd-7122-4a1b-ab5d-ca8bf81bcd00"
	Nov 22 00:06:02 functional-354825 kubelet[3861]: E1122 00:06:02.634670    3861 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hzm54" podUID="05dad3cf-08e8-419e-bfb3-42a931a1093a"
	Nov 22 00:06:05 functional-354825 kubelet[3861]: E1122 00:06:05.634815    3861 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-fglvv" podUID="065ee2fd-7122-4a1b-ab5d-ca8bf81bcd00"
	Nov 22 00:06:15 functional-354825 kubelet[3861]: E1122 00:06:15.635485    3861 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hzm54" podUID="05dad3cf-08e8-419e-bfb3-42a931a1093a"
	Nov 22 00:06:16 functional-354825 kubelet[3861]: E1122 00:06:16.633912    3861 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-fglvv" podUID="065ee2fd-7122-4a1b-ab5d-ca8bf81bcd00"
	Nov 22 00:06:28 functional-354825 kubelet[3861]: E1122 00:06:28.634052    3861 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hzm54" podUID="05dad3cf-08e8-419e-bfb3-42a931a1093a"
	Nov 22 00:06:31 functional-354825 kubelet[3861]: E1122 00:06:31.633982    3861 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-fglvv" podUID="065ee2fd-7122-4a1b-ab5d-ca8bf81bcd00"
	Nov 22 00:06:41 functional-354825 kubelet[3861]: E1122 00:06:41.635556    3861 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hzm54" podUID="05dad3cf-08e8-419e-bfb3-42a931a1093a"
	Nov 22 00:06:46 functional-354825 kubelet[3861]: E1122 00:06:46.633672    3861 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-fglvv" podUID="065ee2fd-7122-4a1b-ab5d-ca8bf81bcd00"
	Nov 22 00:06:56 functional-354825 kubelet[3861]: E1122 00:06:56.633698    3861 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hzm54" podUID="05dad3cf-08e8-419e-bfb3-42a931a1093a"
	Nov 22 00:06:59 functional-354825 kubelet[3861]: E1122 00:06:59.634215    3861 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-fglvv" podUID="065ee2fd-7122-4a1b-ab5d-ca8bf81bcd00"
	Nov 22 00:07:10 functional-354825 kubelet[3861]: E1122 00:07:10.634541    3861 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hzm54" podUID="05dad3cf-08e8-419e-bfb3-42a931a1093a"
	Nov 22 00:07:13 functional-354825 kubelet[3861]: E1122 00:07:13.634456    3861 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-fglvv" podUID="065ee2fd-7122-4a1b-ab5d-ca8bf81bcd00"
	Nov 22 00:07:21 functional-354825 kubelet[3861]: E1122 00:07:21.634378    3861 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hzm54" podUID="05dad3cf-08e8-419e-bfb3-42a931a1093a"
	Nov 22 00:07:26 functional-354825 kubelet[3861]: E1122 00:07:26.634580    3861 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-fglvv" podUID="065ee2fd-7122-4a1b-ab5d-ca8bf81bcd00"
	Nov 22 00:07:34 functional-354825 kubelet[3861]: E1122 00:07:34.634487    3861 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hzm54" podUID="05dad3cf-08e8-419e-bfb3-42a931a1093a"
	Nov 22 00:07:39 functional-354825 kubelet[3861]: E1122 00:07:39.634964    3861 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-fglvv" podUID="065ee2fd-7122-4a1b-ab5d-ca8bf81bcd00"
	Nov 22 00:07:48 functional-354825 kubelet[3861]: E1122 00:07:48.634585    3861 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hzm54" podUID="05dad3cf-08e8-419e-bfb3-42a931a1093a"
	Nov 22 00:07:54 functional-354825 kubelet[3861]: E1122 00:07:54.634359    3861 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-fglvv" podUID="065ee2fd-7122-4a1b-ab5d-ca8bf81bcd00"
	
	
	==> storage-provisioner [4ec9b22907cb542b63d71cd01f64224af3315e50b2b522081c6dc9ee631267ba] <==
	I1121 23:56:37.472489       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1121 23:56:41.103145       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1121 23:56:41.103205       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1121 23:56:41.148071       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:56:44.673870       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:56:48.934181       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:56:52.532835       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:56:55.586472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:56:58.608186       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:56:58.613098       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 23:56:58.613243       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1121 23:56:58.613413       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-354825_6164fb15-1e4a-4614-94f0-e51707100712!
	I1121 23:56:58.614102       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"df982c44-534a-461c-b5f6-c9d6532e6a5c", APIVersion:"v1", ResourceVersion:"527", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-354825_6164fb15-1e4a-4614-94f0-e51707100712 became leader
	W1121 23:56:58.616837       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:56:58.622263       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1121 23:56:58.714117       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-354825_6164fb15-1e4a-4614-94f0-e51707100712!
	W1121 23:57:00.625211       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:57:00.630338       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:57:02.634747       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:57:02.643910       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:57:04.647618       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1121 23:57:04.653548       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [815bac2a26dc7fed13dc73551b9a96d78ba831a56464cedd66c363234daca557] <==
	W1122 00:07:32.353721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:07:34.357449       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:07:34.361629       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:07:36.364318       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:07:36.368933       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:07:38.372272       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:07:38.379534       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:07:40.382246       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:07:40.386558       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:07:42.390260       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:07:42.397678       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:07:44.400970       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:07:44.405340       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:07:46.408833       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:07:46.415487       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:07:48.418102       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:07:48.422380       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:07:50.425090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:07:50.429517       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:07:52.432015       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:07:52.436390       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:07:54.440002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:07:54.446010       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:07:56.448739       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:07:56.453633       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-354825 -n functional-354825
helpers_test.go:269: (dbg) Run:  kubectl --context functional-354825 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-hzm54 hello-node-connect-7d85dfc575-fglvv
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-354825 describe pod hello-node-75c85bcc94-hzm54 hello-node-connect-7d85dfc575-fglvv
helpers_test.go:290: (dbg) kubectl --context functional-354825 describe pod hello-node-75c85bcc94-hzm54 hello-node-connect-7d85dfc575-fglvv:

                                                
                                                
-- stdout --
	Name:             hello-node-75c85bcc94-hzm54
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-354825/192.168.49.2
	Start Time:       Fri, 21 Nov 2025 23:58:10 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cj5gz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-cj5gz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m47s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-hzm54 to functional-354825
	  Normal   Pulling    6m53s (x5 over 9m47s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m53s (x5 over 9m47s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m53s (x5 over 9m47s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m33s (x21 over 9m47s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m33s (x21 over 9m47s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-fglvv
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-354825/192.168.49.2
	Start Time:       Fri, 21 Nov 2025 23:57:53 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x6bqg (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-x6bqg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-fglvv to functional-354825
	  Normal   Pulling    7m11s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m11s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m11s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     5m (x20 over 10m)     kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m48s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.53s)
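Note: every pull attempt in the post-mortem above fails with "short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list", i.e. the CRI-O runtime refuses to resolve the unqualified image name against several candidate registries. A minimal sketch of the same deployment using a fully qualified reference, which is not subject to short-name resolution (the docker.io registry and the 1.0 tag are assumptions for illustration, not part of the original test):

	# Hypothetical re-run with a fully qualified image name (registry and tag assumed).
	kubectl --context functional-354825 create deployment hello-node-connect \
	  --image=docker.io/kicbase/echo-server:1.0
	# Wait for the rollout instead of polling pod phase by hand.
	kubectl --context functional-354825 rollout status deployment/hello-node-connect --timeout=120s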

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-354825 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-354825 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-hzm54" [05dad3cf-08e8-419e-bfb3-42a931a1093a] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1121 23:58:24.078391  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:00:40.217558  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:01:07.920688  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:05:40.217657  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-354825 -n functional-354825
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-11-22 00:08:10.927436533 +0000 UTC m=+1268.157244608
functional_test.go:1460: (dbg) Run:  kubectl --context functional-354825 describe po hello-node-75c85bcc94-hzm54 -n default
functional_test.go:1460: (dbg) kubectl --context functional-354825 describe po hello-node-75c85bcc94-hzm54 -n default:
Name:             hello-node-75c85bcc94-hzm54
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-354825/192.168.49.2
Start Time:       Fri, 21 Nov 2025 23:58:10 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cj5gz (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-cj5gz:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-hzm54 to functional-354825
Normal   Pulling    7m7s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m7s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m7s (x5 over 10m)    kubelet            Error: ErrImagePull
Normal   BackOff    4m47s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m47s (x21 over 10m)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-354825 logs hello-node-75c85bcc94-hzm54 -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-354825 logs hello-node-75c85bcc94-hzm54 -n default: exit status 1 (130.02414ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-hzm54" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-354825 logs hello-node-75c85bcc94-hzm54 -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.93s)
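The "short name mode is enforcing ... returns ambiguous list" message comes from the container runtime's registries configuration, not from Kubernetes itself: with short-name-mode set to enforcing and more than one unqualified search registry, an image name without a registry prefix cannot be resolved in a non-interactive session. A hedged way to confirm that policy on the node (file path and option names per containers-registries.conf(5); shown only as a sketch):

	# Inspect the short-name policy inside the minikube node (illustrative check).
	minikube -p functional-354825 ssh -- \
	  grep -E 'short-name-mode|unqualified-search-registries' /etc/containers/registries.conf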

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-354825 service --namespace=default --https --url hello-node: exit status 115 (511.095431ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:32051
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-354825 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-354825 service hello-node --url --format={{.IP}}: exit status 115 (480.292855ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-354825 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.48s)

TestFunctional/parallel/ServiceCmd/URL (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-354825 service hello-node --url: exit status 115 (488.752109ms)

-- stdout --
	http://192.168.49.2:32051
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-354825 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32051
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.49s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 image load --daemon kicbase/echo-server:functional-354825 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-354825" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.31s)
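
This failure and the ImageReloadDaemon and ImageTagAndLoadDaemon failures below assert the same round trip: a tag present in the host Docker daemon is pushed into the cluster's CRI-O storage with `image load --daemon` and must then appear in `image ls`. A sketch of that round trip, assuming the functional-354825 profile; note that CRI-O often lists daemon-loaded tags with a localhost/ prefix, so a loose grep is safer than an exact match:

docker pull kicbase/echo-server:latest
docker tag kicbase/echo-server:latest kicbase/echo-server:functional-354825
out/minikube-linux-arm64 -p functional-354825 image load --daemon kicbase/echo-server:functional-354825 --alsologtostderr
# The tag should now be visible inside the cluster (possibly as localhost/kicbase/echo-server:functional-354825).
out/minikube-linux-arm64 -p functional-354825 image ls | grep echo-server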

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.94s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 image load --daemon kicbase/echo-server:functional-354825 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-354825" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.94s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-354825
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 image load --daemon kicbase/echo-server:functional-354825 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-354825" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.16s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 image save kicbase/echo-server:functional-354825 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.31s)
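
Because `image save` never wrote the tarball, the ImageLoadFromFile test further down fails on the same path with "no such file or directory". A sketch of the intended save/load round trip with an explicit existence check between the two steps, using the path from the test:

out/minikube-linux-arm64 -p functional-354825 image save kicbase/echo-server:functional-354825 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
# Guard the next step: the load is only meaningful if the tarball was actually produced.
test -s /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar || echo "image save produced no tarball"
out/minikube-linux-arm64 -p functional-354825 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr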

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

** stderr ** 
	I1122 00:08:23.305602  544320 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:08:23.306356  544320 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:08:23.306372  544320 out.go:374] Setting ErrFile to fd 2...
	I1122 00:08:23.306376  544320 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:08:23.306660  544320 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1122 00:08:23.307472  544320 config.go:182] Loaded profile config "functional-354825": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:08:23.307648  544320 config.go:182] Loaded profile config "functional-354825": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:08:23.308236  544320 cli_runner.go:164] Run: docker container inspect functional-354825 --format={{.State.Status}}
	I1122 00:08:23.324794  544320 ssh_runner.go:195] Run: systemctl --version
	I1122 00:08:23.324885  544320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-354825
	I1122 00:08:23.343165  544320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33505 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/functional-354825/id_rsa Username:docker}
	I1122 00:08:23.445617  544320 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1122 00:08:23.445673  544320 cache_images.go:255] Failed to load cached images for "functional-354825": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1122 00:08:23.445697  544320 cache_images.go:267] failed pushing to: functional-354825

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.20s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-354825
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 image save --daemon kicbase/echo-server:functional-354825 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-354825
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-354825: exit status 1 (27.213041ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-354825

** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-354825

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.46s)
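
The `image save --daemon` variant is expected to export the image from the cluster back into the host Docker daemon, where the test then looks it up under the localhost/ prefix. A minimal sketch of that check, assuming the tag is present inside the cluster beforehand:

out/minikube-linux-arm64 -p functional-354825 image save --daemon kicbase/echo-server:functional-354825 --alsologtostderr
# Exit status 0 and a printed image ID mean the export reached the host daemon.
docker image inspect localhost/kicbase/echo-server:functional-354825 --format '{{.Id}}'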

TestMultiControlPlane/serial/RestartClusterKeepsNodes (532.4s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-561110 stop --alsologtostderr -v 5: (27.569719309s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 start --wait true --alsologtostderr -v 5
E1122 00:15:28.476781  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/functional-354825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:15:40.221469  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:17:44.615893  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/functional-354825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:18:12.318659  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/functional-354825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:20:40.218067  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:22:44.617534  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/functional-354825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-561110 start --wait true --alsologtostderr -v 5: exit status 80 (8m21.785074131s)

-- stdout --
	* [ha-561110] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21934
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21934-513600/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-513600/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-561110" primary control-plane node in "ha-561110" cluster
	* Pulling base image v0.0.48-1763588073-21934 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	* Enabled addons: 
	
	* Starting "ha-561110-m02" control-plane node in "ha-561110" cluster
	* Pulling base image v0.0.48-1763588073-21934 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	  - env NO_PROXY=192.168.49.2
	* Verifying Kubernetes components...
	
	* Starting "ha-561110-m03" control-plane node in "ha-561110" cluster
	* Pulling base image v0.0.48-1763588073-21934 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2,192.168.49.3
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	  - env NO_PROXY=192.168.49.2
	  - env NO_PROXY=192.168.49.2,192.168.49.3
	* Verifying Kubernetes components...
	
	

-- /stdout --
** stderr ** 
	I1122 00:14:41.051374  563925 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:14:41.051556  563925 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:14:41.051586  563925 out.go:374] Setting ErrFile to fd 2...
	I1122 00:14:41.051607  563925 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:14:41.051880  563925 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1122 00:14:41.052266  563925 out.go:368] Setting JSON to false
	I1122 00:14:41.053166  563925 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":17797,"bootTime":1763752684,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1122 00:14:41.053270  563925 start.go:143] virtualization:  
	I1122 00:14:41.056667  563925 out.go:179] * [ha-561110] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1122 00:14:41.060532  563925 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:14:41.060603  563925 notify.go:221] Checking for updates...
	I1122 00:14:41.067352  563925 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:14:41.070297  563925 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1122 00:14:41.073934  563925 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-513600/.minikube
	I1122 00:14:41.076934  563925 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1122 00:14:41.079898  563925 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:14:41.083494  563925 config.go:182] Loaded profile config "ha-561110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:14:41.083606  563925 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:14:41.111284  563925 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1122 00:14:41.111387  563925 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:14:41.175037  563925 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-22 00:14:41.165296296 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:14:41.175148  563925 docker.go:319] overlay module found
	I1122 00:14:41.178250  563925 out.go:179] * Using the docker driver based on existing profile
	I1122 00:14:41.180953  563925 start.go:309] selected driver: docker
	I1122 00:14:41.180971  563925 start.go:930] validating driver "docker" against &{Name:ha-561110 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-561110 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:14:41.181129  563925 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:14:41.181235  563925 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:14:41.238102  563925 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-22 00:14:41.228646014 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:14:41.238520  563925 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:14:41.238556  563925 cni.go:84] Creating CNI manager for ""
	I1122 00:14:41.238614  563925 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1122 00:14:41.238661  563925 start.go:353] cluster config:
	{Name:ha-561110 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-561110 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:14:41.241877  563925 out.go:179] * Starting "ha-561110" primary control-plane node in "ha-561110" cluster
	I1122 00:14:41.244623  563925 cache.go:134] Beginning downloading kic base image for docker with crio
	I1122 00:14:41.247356  563925 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:14:41.250191  563925 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:14:41.250238  563925 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1122 00:14:41.250251  563925 cache.go:65] Caching tarball of preloaded images
	I1122 00:14:41.250256  563925 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:14:41.250328  563925 preload.go:238] Found /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1122 00:14:41.250339  563925 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1122 00:14:41.250480  563925 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/config.json ...
	I1122 00:14:41.275134  563925 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:14:41.275155  563925 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:14:41.275171  563925 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:14:41.275193  563925 start.go:360] acquireMachinesLock for ha-561110: {Name:mkb487371897d491a1a254bbfa266b10650bf7bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:14:41.275256  563925 start.go:364] duration metric: took 36.265µs to acquireMachinesLock for "ha-561110"
	I1122 00:14:41.275288  563925 start.go:96] Skipping create...Using existing machine configuration
	I1122 00:14:41.275297  563925 fix.go:54] fixHost starting: 
	I1122 00:14:41.275560  563925 cli_runner.go:164] Run: docker container inspect ha-561110 --format={{.State.Status}}
	I1122 00:14:41.292644  563925 fix.go:112] recreateIfNeeded on ha-561110: state=Stopped err=<nil>
	W1122 00:14:41.292679  563925 fix.go:138] unexpected machine state, will restart: <nil>
	I1122 00:14:41.295991  563925 out.go:252] * Restarting existing docker container for "ha-561110" ...
	I1122 00:14:41.296094  563925 cli_runner.go:164] Run: docker start ha-561110
	I1122 00:14:41.567342  563925 cli_runner.go:164] Run: docker container inspect ha-561110 --format={{.State.Status}}
	I1122 00:14:41.593759  563925 kic.go:430] container "ha-561110" state is running.
	I1122 00:14:41.594265  563925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-561110
	I1122 00:14:41.625087  563925 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/config.json ...
	I1122 00:14:41.625337  563925 machine.go:94] provisionDockerMachine start ...
	I1122 00:14:41.625405  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110
	I1122 00:14:41.644350  563925 main.go:143] libmachine: Using SSH client type: native
	I1122 00:14:41.644684  563925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33535 <nil> <nil>}
	I1122 00:14:41.644692  563925 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:14:41.645633  563925 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1122 00:14:44.789929  563925 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-561110
	
	I1122 00:14:44.789988  563925 ubuntu.go:182] provisioning hostname "ha-561110"
	I1122 00:14:44.790089  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110
	I1122 00:14:44.809008  563925 main.go:143] libmachine: Using SSH client type: native
	I1122 00:14:44.809338  563925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33535 <nil> <nil>}
	I1122 00:14:44.809354  563925 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-561110 && echo "ha-561110" | sudo tee /etc/hostname
	I1122 00:14:44.959054  563925 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-561110
	
	I1122 00:14:44.959174  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110
	I1122 00:14:44.977402  563925 main.go:143] libmachine: Using SSH client type: native
	I1122 00:14:44.977725  563925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33535 <nil> <nil>}
	I1122 00:14:44.977747  563925 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-561110' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-561110/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-561110' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:14:45.148701  563925 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1122 00:14:45.148780  563925 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-513600/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-513600/.minikube}
	I1122 00:14:45.148894  563925 ubuntu.go:190] setting up certificates
	I1122 00:14:45.148911  563925 provision.go:84] configureAuth start
	I1122 00:14:45.149003  563925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-561110
	I1122 00:14:45.178821  563925 provision.go:143] copyHostCerts
	I1122 00:14:45.178872  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem
	I1122 00:14:45.178980  563925 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem, removing ...
	I1122 00:14:45.179051  563925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem
	I1122 00:14:45.179147  563925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem (1078 bytes)
	I1122 00:14:45.179368  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem
	I1122 00:14:45.179396  563925 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem, removing ...
	I1122 00:14:45.179408  563925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem
	I1122 00:14:45.179513  563925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem (1123 bytes)
	I1122 00:14:45.179582  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem
	I1122 00:14:45.179688  563925 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem, removing ...
	I1122 00:14:45.179693  563925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem
	I1122 00:14:45.179763  563925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem (1675 bytes)
	I1122 00:14:45.179869  563925 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem org=jenkins.ha-561110 san=[127.0.0.1 192.168.49.2 ha-561110 localhost minikube]
	I1122 00:14:45.360921  563925 provision.go:177] copyRemoteCerts
	I1122 00:14:45.360991  563925 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:14:45.361031  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110
	I1122 00:14:45.379675  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33535 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110/id_rsa Username:docker}
	I1122 00:14:45.481986  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1122 00:14:45.482096  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:14:45.500661  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1122 00:14:45.500750  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1122 00:14:45.519280  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1122 00:14:45.519388  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1122 00:14:45.538099  563925 provision.go:87] duration metric: took 389.17288ms to configureAuth
	I1122 00:14:45.538126  563925 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:14:45.538361  563925 config.go:182] Loaded profile config "ha-561110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:14:45.538464  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110
	I1122 00:14:45.557843  563925 main.go:143] libmachine: Using SSH client type: native
	I1122 00:14:45.558153  563925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33535 <nil> <nil>}
	I1122 00:14:45.558173  563925 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1122 00:14:45.916699  563925 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1122 00:14:45.916722  563925 machine.go:97] duration metric: took 4.291375262s to provisionDockerMachine
	I1122 00:14:45.916734  563925 start.go:293] postStartSetup for "ha-561110" (driver="docker")
	I1122 00:14:45.916744  563925 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:14:45.916808  563925 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:14:45.916864  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110
	I1122 00:14:45.937454  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33535 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110/id_rsa Username:docker}
	I1122 00:14:46.038557  563925 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:14:46.042104  563925 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:14:46.042148  563925 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:14:46.042162  563925 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/addons for local assets ...
	I1122 00:14:46.042244  563925 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/files for local assets ...
	I1122 00:14:46.042340  563925 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem -> 5169372.pem in /etc/ssl/certs
	I1122 00:14:46.042358  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem -> /etc/ssl/certs/5169372.pem
	I1122 00:14:46.042519  563925 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:14:46.050335  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /etc/ssl/certs/5169372.pem (1708 bytes)
	I1122 00:14:46.070075  563925 start.go:296] duration metric: took 153.324249ms for postStartSetup
	I1122 00:14:46.070158  563925 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:14:46.070200  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110
	I1122 00:14:46.089314  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33535 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110/id_rsa Username:docker}
	I1122 00:14:46.187250  563925 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:14:46.192065  563925 fix.go:56] duration metric: took 4.916761973s for fixHost
	I1122 00:14:46.192091  563925 start.go:83] releasing machines lock for "ha-561110", held for 4.916821031s
	I1122 00:14:46.192188  563925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-561110
	I1122 00:14:46.209139  563925 ssh_runner.go:195] Run: cat /version.json
	I1122 00:14:46.209197  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110
	I1122 00:14:46.209461  563925 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:14:46.209511  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110
	I1122 00:14:46.233161  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33535 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110/id_rsa Username:docker}
	I1122 00:14:46.237608  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33535 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110/id_rsa Username:docker}
	I1122 00:14:46.417414  563925 ssh_runner.go:195] Run: systemctl --version
	I1122 00:14:46.423708  563925 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1122 00:14:46.459853  563925 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:14:46.464430  563925 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:14:46.464499  563925 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:14:46.472070  563925 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1122 00:14:46.472092  563925 start.go:496] detecting cgroup driver to use...
	I1122 00:14:46.472140  563925 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1122 00:14:46.472192  563925 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1122 00:14:46.487805  563925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1122 00:14:46.501008  563925 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:14:46.501113  563925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:14:46.517083  563925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:14:46.530035  563925 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:14:46.634532  563925 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:14:46.753160  563925 docker.go:234] disabling docker service ...
	I1122 00:14:46.753271  563925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:14:46.768112  563925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:14:46.781109  563925 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:14:46.889282  563925 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:14:47.012744  563925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:14:47.026639  563925 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:14:47.040275  563925 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1122 00:14:47.040386  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:14:47.049142  563925 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1122 00:14:47.049222  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:14:47.057948  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:14:47.066761  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:14:47.076164  563925 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:14:47.085123  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:14:47.094801  563925 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:14:47.102952  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:14:47.111641  563925 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:14:47.119239  563925 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:14:47.126541  563925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:14:47.233256  563925 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1122 00:14:47.384501  563925 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1122 00:14:47.384567  563925 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1122 00:14:47.388356  563925 start.go:564] Will wait 60s for crictl version
	I1122 00:14:47.388468  563925 ssh_runner.go:195] Run: which crictl
	I1122 00:14:47.392030  563925 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:14:47.416283  563925 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1122 00:14:47.416422  563925 ssh_runner.go:195] Run: crio --version
	I1122 00:14:47.444890  563925 ssh_runner.go:195] Run: crio --version
	I1122 00:14:47.480934  563925 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1122 00:14:47.483635  563925 cli_runner.go:164] Run: docker network inspect ha-561110 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:14:47.499516  563925 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1122 00:14:47.503369  563925 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:14:47.513239  563925 kubeadm.go:884] updating cluster {Name:ha-561110 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-561110 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1122 00:14:47.513386  563925 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:14:47.513453  563925 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:14:47.547714  563925 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:14:47.547741  563925 crio.go:433] Images already preloaded, skipping extraction
	I1122 00:14:47.547794  563925 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:14:47.572446  563925 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:14:47.572474  563925 cache_images.go:86] Images are preloaded, skipping loading
	I1122 00:14:47.572483  563925 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1122 00:14:47.572577  563925 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-561110 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-561110 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1122 00:14:47.572661  563925 ssh_runner.go:195] Run: crio config
	I1122 00:14:47.634066  563925 cni.go:84] Creating CNI manager for ""
	I1122 00:14:47.634094  563925 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1122 00:14:47.634114  563925 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1122 00:14:47.634156  563925 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-561110 NodeName:ha-561110 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 00:14:47.634316  563925 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-561110"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
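The YAML above is the kubeadm configuration minikube renders for the primary control-plane node (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one document set). As a rough illustration of how such a document can be produced from the per-cluster values shown in the log, here is a minimal Go sketch using text/template; the struct fields and template text are invented for the example and are not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// kubeadmParams holds the handful of values the example template needs.
type kubeadmParams struct {
	KubernetesVersion string
	ClusterName       string
	BindPort          int
	PodSubnet         string
	ServiceSubnet     string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
kubernetesVersion: {{.KubernetesVersion}}
controlPlaneEndpoint: control-plane.minikube.internal:{{.BindPort}}
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	p := kubeadmParams{
		KubernetesVersion: "v1.34.1",
		ClusterName:       "ha-561110",
		BindPort:          8443,
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
	}
	// template.Must panics on a parse error, which is acceptable for a static template.
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}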
	
	I1122 00:14:47.634340  563925 kube-vip.go:115] generating kube-vip config ...
	I1122 00:14:47.634397  563925 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1122 00:14:47.646470  563925 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1122 00:14:47.646593  563925 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
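The kube-vip static pod manifest above was generated right after the `lsmod | grep ip_vs` probe exited non-zero, so minikube skipped IPVS-based control-plane load-balancing and kept only the ARP-managed VIP (192.168.49.254). A small Go sketch of that probe-and-fallback decision, assuming lsmod is available on the node; this is illustrative and not minikube's own code:

package main

import (
	"fmt"
	"os/exec"
)

// ipvsAvailable mirrors the log's check: a non-zero exit from
// "lsmod | grep ip_vs" means no ip_vs kernel modules are loaded.
func ipvsAvailable() bool {
	err := exec.Command("sh", "-c", "lsmod | grep -q ip_vs").Run()
	return err == nil
}

func main() {
	if ipvsAvailable() {
		fmt.Println("ip_vs modules present: IPVS load balancing can be enabled")
	} else {
		fmt.Println("ip_vs modules missing: falling back to ARP-based VIP only")
	}
}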
	I1122 00:14:47.646695  563925 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 00:14:47.654183  563925 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:14:47.654249  563925 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1122 00:14:47.661699  563925 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1122 00:14:47.674165  563925 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:14:47.686331  563925 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1122 00:14:47.698542  563925 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1122 00:14:47.711254  563925 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1122 00:14:47.714862  563925 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:14:47.724174  563925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:14:47.839371  563925 ssh_runner.go:195] Run: sudo systemctl start kubelet
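Before the kubelet is started, the log shows /etc/hosts being rewritten so control-plane.minikube.internal resolves to the HA VIP 192.168.49.254. The shell pipeline used there (grep -v the old entry, append the new one, copy the temp file back over /etc/hosts) can be expressed in Go roughly as below; the path and values are taken from the log, and the sketch is illustrative rather than minikube's implementation:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing mapping for host and appends "ip<TAB>host",
// mirroring the grep -v / echo pipeline in the log above.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // remove a stale mapping for this hostname
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Values from the log; editing /etc/hosts requires root, so this is illustrative only.
	if err := ensureHostsEntry("/etc/hosts", "192.168.49.254", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}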
	I1122 00:14:47.853685  563925 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110 for IP: 192.168.49.2
	I1122 00:14:47.853753  563925 certs.go:195] generating shared ca certs ...
	I1122 00:14:47.853787  563925 certs.go:227] acquiring lock for ca certs: {Name:mkaf4c79493334cb2058b349c36be7473837f9b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:14:47.853987  563925 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key
	I1122 00:14:47.854075  563925 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key
	I1122 00:14:47.854111  563925 certs.go:257] generating profile certs ...
	I1122 00:14:47.854232  563925 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/client.key
	I1122 00:14:47.854280  563925 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.key.17887f76
	I1122 00:14:47.854319  563925 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.crt.17887f76 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1122 00:14:47.941434  563925 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.crt.17887f76 ...
	I1122 00:14:47.941949  563925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.crt.17887f76: {Name:mk196d114e0b17147f8bed35c49f594a2533cc5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:14:47.942154  563925 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.key.17887f76 ...
	I1122 00:14:47.942191  563925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.key.17887f76: {Name:mk34aa50af1cad4bd0a7687c2b98f2a65013e746 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:14:47.942314  563925 certs.go:382] copying /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.crt.17887f76 -> /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.crt
	I1122 00:14:47.942500  563925 certs.go:386] copying /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.key.17887f76 -> /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.key
	I1122 00:14:47.942693  563925 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.key
	I1122 00:14:47.942729  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1122 00:14:47.942772  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1122 00:14:47.942814  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1122 00:14:47.942845  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1122 00:14:47.942881  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1122 00:14:47.942927  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1122 00:14:47.942960  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1122 00:14:47.942996  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1122 00:14:47.943078  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem (1338 bytes)
	W1122 00:14:47.943133  563925 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937_empty.pem, impossibly tiny 0 bytes
	I1122 00:14:47.943156  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:14:47.943215  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:14:47.943265  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:14:47.943352  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem (1675 bytes)
	I1122 00:14:47.943431  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem (1708 bytes)
	I1122 00:14:47.943512  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:14:47.943556  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem -> /usr/share/ca-certificates/516937.pem
	I1122 00:14:47.943584  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem -> /usr/share/ca-certificates/5169372.pem
	I1122 00:14:47.944164  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:14:47.970032  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1122 00:14:47.993299  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:14:48.024732  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:14:48.049916  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1122 00:14:48.074841  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1122 00:14:48.093300  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:14:48.113386  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1122 00:14:48.133760  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:14:48.153049  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem --> /usr/share/ca-certificates/516937.pem (1338 bytes)
	I1122 00:14:48.173569  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /usr/share/ca-certificates/5169372.pem (1708 bytes)
	I1122 00:14:48.198292  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 00:14:48.211957  563925 ssh_runner.go:195] Run: openssl version
	I1122 00:14:48.218515  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516937.pem && ln -fs /usr/share/ca-certificates/516937.pem /etc/ssl/certs/516937.pem"
	I1122 00:14:48.228447  563925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516937.pem
	I1122 00:14:48.232426  563925 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:55 /usr/share/ca-certificates/516937.pem
	I1122 00:14:48.232551  563925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516937.pem
	I1122 00:14:48.273469  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516937.pem /etc/ssl/certs/51391683.0"
	I1122 00:14:48.281348  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5169372.pem && ln -fs /usr/share/ca-certificates/5169372.pem /etc/ssl/certs/5169372.pem"
	I1122 00:14:48.289635  563925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5169372.pem
	I1122 00:14:48.293430  563925 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:55 /usr/share/ca-certificates/5169372.pem
	I1122 00:14:48.293550  563925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5169372.pem
	I1122 00:14:48.335324  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5169372.pem /etc/ssl/certs/3ec20f2e.0"
	I1122 00:14:48.343382  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:14:48.351346  563925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:14:48.354892  563925 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:14:48.354958  563925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:14:48.398958  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
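The openssl/ln pairs above install each CA into the node's OpenSSL trust directory: `openssl x509 -hash -noout` prints the subject-name hash (b5213941 for minikubeCA in this run), and the certificate is then linked as /etc/ssl/certs/<hash>.0, which is the name OpenSSL consults at verification time. A hedged Go sketch of the same idea, with paths assumed from the log (it must run as root):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem" // location used in the log
	// Ask openssl for the subject-name hash of the certificate.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// OpenSSL looks up CAs by this <subject-hash>.0 name in its cert directory.
	if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil && !os.IsExist(err) {
		panic(err)
	}
	fmt.Println("linked", link)
}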
	I1122 00:14:48.406910  563925 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:14:48.410614  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1122 00:14:48.451560  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1122 00:14:48.492804  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1122 00:14:48.540013  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1122 00:14:48.585271  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1122 00:14:48.653970  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
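Each `-checkend 86400` call above asks openssl whether the certificate will still be valid 24 hours from now, which is how the restart path decides the existing control-plane certificates can be reused. The equivalent check with Go's standard library (a sketch, not the code minikube runs):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d,
// matching the semantics of "openssl x509 -checkend <seconds>".
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Path taken from the log above; the same check applies to the other certs listed.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}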
	I1122 00:14:48.747548  563925 kubeadm.go:401] StartCluster: {Name:ha-561110 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-561110 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:14:48.747694  563925 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1122 00:14:48.747775  563925 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:14:48.836090  563925 cri.go:89] found id: "4cbb3fde391bd86e756416ec260b0b8a5501d5139da802107965d9e012c4eca5"
	I1122 00:14:48.836127  563925 cri.go:89] found id: "4360f5517fd5eb7d570a98dee1b801419d3b650d7e890d5ddecc79946fba46db"
	I1122 00:14:48.836132  563925 cri.go:89] found id: "a395e7473ffe2b7999ae75a70e19b4f153d459c8ccae48aeeb71b5b6248cc1f2"
	I1122 00:14:48.836136  563925 cri.go:89] found id: "9fdf72902e6e01af8761552bc83ad83cdf5a34600401d1ee9126ac6a25ae0e37"
	I1122 00:14:48.836140  563925 cri.go:89] found id: "1c929db60119ab54f03020d00f2063dc6672d329ea34f4504e502142bffbe644"
	I1122 00:14:48.836148  563925 cri.go:89] found id: ""
	I1122 00:14:48.836216  563925 ssh_runner.go:195] Run: sudo runc list -f json
	W1122 00:14:48.857525  563925 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:14:48Z" level=error msg="open /run/runc: no such file or directory"
	I1122 00:14:48.857613  563925 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 00:14:48.878520  563925 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1122 00:14:48.878565  563925 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1122 00:14:48.878624  563925 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1122 00:14:48.898381  563925 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1122 00:14:48.898972  563925 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-561110" does not appear in /home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1122 00:14:48.899101  563925 kubeconfig.go:62] /home/jenkins/minikube-integration/21934-513600/kubeconfig needs updating (will repair): [kubeconfig missing "ha-561110" cluster setting kubeconfig missing "ha-561110" context setting]
	I1122 00:14:48.900028  563925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/kubeconfig: {Name:mkcd094d8a765e5a81369c0fb33cb5e696b17bb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:14:48.901567  563925 kapi.go:59] client config for ha-561110: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/client.crt", KeyFile:"/home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/client.key", CAFile:"/home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1122 00:14:48.907943  563925 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1122 00:14:48.907972  563925 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1122 00:14:48.907979  563925 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1122 00:14:48.907984  563925 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1122 00:14:48.907993  563925 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1122 00:14:48.908413  563925 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1122 00:14:48.908668  563925 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1122 00:14:48.938459  563925 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1122 00:14:48.938496  563925 kubeadm.go:602] duration metric: took 59.924061ms to restartPrimaryControlPlane
	I1122 00:14:48.938507  563925 kubeadm.go:403] duration metric: took 190.97977ms to StartCluster
	I1122 00:14:48.938533  563925 settings.go:142] acquiring lock: {Name:mk6c31eb57ec65b047b78b4e1046e03fe7cc77bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:14:48.938632  563925 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1122 00:14:48.939442  563925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/kubeconfig: {Name:mkcd094d8a765e5a81369c0fb33cb5e696b17bb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:14:48.939701  563925 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:14:48.939739  563925 start.go:242] waiting for startup goroutines ...
	I1122 00:14:48.939758  563925 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1122 00:14:48.940342  563925 config.go:182] Loaded profile config "ha-561110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:14:48.944134  563925 out.go:179] * Enabled addons: 
	I1122 00:14:48.947186  563925 addons.go:530] duration metric: took 7.425265ms for enable addons: enabled=[]
	I1122 00:14:48.947258  563925 start.go:247] waiting for cluster config update ...
	I1122 00:14:48.947278  563925 start.go:256] writing updated cluster config ...
	I1122 00:14:48.950835  563925 out.go:203] 
	I1122 00:14:48.954183  563925 config.go:182] Loaded profile config "ha-561110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:14:48.954390  563925 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/config.json ...
	I1122 00:14:48.958001  563925 out.go:179] * Starting "ha-561110-m02" control-plane node in "ha-561110" cluster
	I1122 00:14:48.961037  563925 cache.go:134] Beginning downloading kic base image for docker with crio
	I1122 00:14:48.964123  563925 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:14:48.966981  563925 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:14:48.967024  563925 cache.go:65] Caching tarball of preloaded images
	I1122 00:14:48.967169  563925 preload.go:238] Found /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1122 00:14:48.967185  563925 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1122 00:14:48.967352  563925 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/config.json ...
	I1122 00:14:48.967608  563925 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:14:49.000604  563925 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:14:49.000625  563925 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:14:49.000646  563925 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:14:49.000671  563925 start.go:360] acquireMachinesLock for ha-561110-m02: {Name:mkb358f78002efa4c17b8c7cead5ae57992aea2d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:14:49.000737  563925 start.go:364] duration metric: took 50.534µs to acquireMachinesLock for "ha-561110-m02"
	I1122 00:14:49.000757  563925 start.go:96] Skipping create...Using existing machine configuration
	I1122 00:14:49.000763  563925 fix.go:54] fixHost starting: m02
	I1122 00:14:49.001076  563925 cli_runner.go:164] Run: docker container inspect ha-561110-m02 --format={{.State.Status}}
	I1122 00:14:49.034056  563925 fix.go:112] recreateIfNeeded on ha-561110-m02: state=Stopped err=<nil>
	W1122 00:14:49.034088  563925 fix.go:138] unexpected machine state, will restart: <nil>
	I1122 00:14:49.037399  563925 out.go:252] * Restarting existing docker container for "ha-561110-m02" ...
	I1122 00:14:49.037518  563925 cli_runner.go:164] Run: docker start ha-561110-m02
	I1122 00:14:49.451675  563925 cli_runner.go:164] Run: docker container inspect ha-561110-m02 --format={{.State.Status}}
	I1122 00:14:49.475681  563925 kic.go:430] container "ha-561110-m02" state is running.
	I1122 00:14:49.476112  563925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-561110-m02
	I1122 00:14:49.506374  563925 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/config.json ...
	I1122 00:14:49.506719  563925 machine.go:94] provisionDockerMachine start ...
	I1122 00:14:49.506835  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m02
	I1122 00:14:49.550202  563925 main.go:143] libmachine: Using SSH client type: native
	I1122 00:14:49.550557  563925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33540 <nil> <nil>}
	I1122 00:14:49.550573  563925 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:14:49.551331  563925 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37062->127.0.0.1:33540: read: connection reset by peer
	I1122 00:14:52.908642  563925 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-561110-m02
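The initial "connection reset by peer" followed a few seconds later by a successful hostname command is the usual pattern when a container has just been restarted and sshd is not yet accepting connections; the SSH client simply retries until the daemon is up. A small Go sketch of such a wait loop, using a plain TCP reachability check with a fixed delay and the host port shown in the log:

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH polls addr until a TCP connection succeeds or attempts run out.
func waitForSSH(addr string, attempts int, delay time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		var c net.Conn
		if c, err = net.DialTimeout("tcp", addr, 2*time.Second); err == nil {
			c.Close()
			return nil
		}
		time.Sleep(delay)
	}
	return fmt.Errorf("ssh port never became reachable: %w", err)
}

func main() {
	// 127.0.0.1:33540 is the host port docker mapped to the node's port 22 in this run.
	if err := waitForSSH("127.0.0.1:33540", 10, time.Second); err != nil {
		panic(err)
	}
	fmt.Println("sshd is accepting connections")
}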
	
	I1122 00:14:52.908715  563925 ubuntu.go:182] provisioning hostname "ha-561110-m02"
	I1122 00:14:52.908805  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m02
	I1122 00:14:52.953932  563925 main.go:143] libmachine: Using SSH client type: native
	I1122 00:14:52.954246  563925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33540 <nil> <nil>}
	I1122 00:14:52.954258  563925 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-561110-m02 && echo "ha-561110-m02" | sudo tee /etc/hostname
	I1122 00:14:53.345252  563925 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-561110-m02
	
	I1122 00:14:53.345401  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m02
	I1122 00:14:53.377691  563925 main.go:143] libmachine: Using SSH client type: native
	I1122 00:14:53.378150  563925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33540 <nil> <nil>}
	I1122 00:14:53.378172  563925 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-561110-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-561110-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-561110-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:14:53.591463  563925 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1122 00:14:53.591496  563925 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-513600/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-513600/.minikube}
	I1122 00:14:53.591513  563925 ubuntu.go:190] setting up certificates
	I1122 00:14:53.591526  563925 provision.go:84] configureAuth start
	I1122 00:14:53.591597  563925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-561110-m02
	I1122 00:14:53.618168  563925 provision.go:143] copyHostCerts
	I1122 00:14:53.618211  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem
	I1122 00:14:53.618242  563925 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem, removing ...
	I1122 00:14:53.618253  563925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem
	I1122 00:14:53.618333  563925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem (1078 bytes)
	I1122 00:14:53.618435  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem
	I1122 00:14:53.618458  563925 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem, removing ...
	I1122 00:14:53.618465  563925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem
	I1122 00:14:53.618494  563925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem (1123 bytes)
	I1122 00:14:53.618552  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem
	I1122 00:14:53.618576  563925 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem, removing ...
	I1122 00:14:53.618584  563925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem
	I1122 00:14:53.618612  563925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem (1675 bytes)
	I1122 00:14:53.618665  563925 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem org=jenkins.ha-561110-m02 san=[127.0.0.1 192.168.49.3 ha-561110-m02 localhost minikube]
	I1122 00:14:53.787782  563925 provision.go:177] copyRemoteCerts
	I1122 00:14:53.787855  563925 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:14:53.787902  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m02
	I1122 00:14:53.805764  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110-m02/id_rsa Username:docker}
	I1122 00:14:53.914816  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1122 00:14:53.914879  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:14:53.944075  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1122 00:14:53.944134  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1122 00:14:53.978384  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1122 00:14:53.978443  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1122 00:14:54.007139  563925 provision.go:87] duration metric: took 415.59481ms to configureAuth
	I1122 00:14:54.007174  563925 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:14:54.007455  563925 config.go:182] Loaded profile config "ha-561110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:14:54.007583  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m02
	I1122 00:14:54.047939  563925 main.go:143] libmachine: Using SSH client type: native
	I1122 00:14:54.048267  563925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33540 <nil> <nil>}
	I1122 00:14:54.048291  563925 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1122 00:14:54.482099  563925 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1122 00:14:54.482120  563925 machine.go:97] duration metric: took 4.975378731s to provisionDockerMachine
	I1122 00:14:54.482133  563925 start.go:293] postStartSetup for "ha-561110-m02" (driver="docker")
	I1122 00:14:54.482144  563925 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:14:54.482209  563925 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:14:54.482252  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m02
	I1122 00:14:54.500164  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110-m02/id_rsa Username:docker}
	I1122 00:14:54.602698  563925 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:14:54.606253  563925 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:14:54.606285  563925 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:14:54.606296  563925 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/addons for local assets ...
	I1122 00:14:54.606352  563925 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/files for local assets ...
	I1122 00:14:54.606439  563925 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem -> 5169372.pem in /etc/ssl/certs
	I1122 00:14:54.606450  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem -> /etc/ssl/certs/5169372.pem
	I1122 00:14:54.606572  563925 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:14:54.614732  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /etc/ssl/certs/5169372.pem (1708 bytes)
	I1122 00:14:54.633198  563925 start.go:296] duration metric: took 151.050123ms for postStartSetup
	I1122 00:14:54.633327  563925 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:14:54.633378  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m02
	I1122 00:14:54.651888  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110-m02/id_rsa Username:docker}
	I1122 00:14:54.751498  563925 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:14:54.757858  563925 fix.go:56] duration metric: took 5.757088169s for fixHost
	I1122 00:14:54.757886  563925 start.go:83] releasing machines lock for "ha-561110-m02", held for 5.757140204s
	I1122 00:14:54.757958  563925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-561110-m02
	I1122 00:14:54.778371  563925 out.go:179] * Found network options:
	I1122 00:14:54.781341  563925 out.go:179]   - NO_PROXY=192.168.49.2
	W1122 00:14:54.784285  563925 proxy.go:120] fail to check proxy env: Error ip not in block
	W1122 00:14:54.784332  563925 proxy.go:120] fail to check proxy env: Error ip not in block
	I1122 00:14:54.784409  563925 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1122 00:14:54.784457  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m02
	I1122 00:14:54.784734  563925 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:14:54.784793  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m02
	I1122 00:14:54.806895  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110-m02/id_rsa Username:docker}
	I1122 00:14:54.810601  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110-m02/id_rsa Username:docker}
	I1122 00:14:54.952580  563925 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:14:55.010644  563925 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:14:55.010736  563925 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:14:55.020151  563925 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1122 00:14:55.020182  563925 start.go:496] detecting cgroup driver to use...
	I1122 00:14:55.020226  563925 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1122 00:14:55.020299  563925 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1122 00:14:55.036774  563925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1122 00:14:55.050901  563925 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:14:55.051008  563925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:14:55.067844  563925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:14:55.088601  563925 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:14:55.315735  563925 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:14:55.558850  563925 docker.go:234] disabling docker service ...
	I1122 00:14:55.558960  563925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:14:55.576438  563925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:14:55.595046  563925 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:14:55.815234  563925 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:14:56.006098  563925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:14:56.021481  563925 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:14:56.044364  563925 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1122 00:14:56.044478  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:14:56.068864  563925 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1122 00:14:56.068980  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:14:56.084397  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:14:56.114539  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:14:56.145163  563925 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:14:56.167039  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:14:56.186342  563925 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:14:56.205126  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:14:56.216422  563925 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:14:56.246320  563925 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:14:56.266882  563925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:14:56.589643  563925 ssh_runner.go:195] Run: sudo systemctl restart crio
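The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs cgroup manager, conmon cgroup, unprivileged-port sysctl), then reloads systemd and restarts CRI-O so the changes take effect. The first of those edits, setting the pause image, looks roughly like this in Go; the sed expression mirrors the one in the log, and the sketch is illustrative rather than minikube's code:

package main

import (
	"fmt"
	"os/exec"
)

// setPauseImage rewrites the pause_image line in CRI-O's drop-in config,
// equivalent to the log's: sed -i 's|^.*pause_image = .*$|pause_image = "<image>"|' ...
func setPauseImage(image string) error {
	expr := fmt.Sprintf(`s|^.*pause_image = .*$|pause_image = "%s"|`, image)
	return exec.Command("sudo", "sed", "-i", expr, "/etc/crio/crio.conf.d/02-crio.conf").Run()
}

func main() {
	if err := setPauseImage("registry.k8s.io/pause:3.10.1"); err != nil {
		panic(err)
	}
	// CRI-O only picks the change up after a restart, as in the log.
	if err := exec.Command("sudo", "systemctl", "restart", "crio").Run(); err != nil {
		panic(err)
	}
}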
	I1122 00:14:56.984258  563925 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1122 00:14:56.984384  563925 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1122 00:14:56.988684  563925 start.go:564] Will wait 60s for crictl version
	I1122 00:14:56.988823  563925 ssh_runner.go:195] Run: which crictl
	I1122 00:14:56.993930  563925 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:14:57.036836  563925 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1122 00:14:57.036996  563925 ssh_runner.go:195] Run: crio --version
	I1122 00:14:57.084070  563925 ssh_runner.go:195] Run: crio --version
	I1122 00:14:57.125443  563925 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1122 00:14:57.128539  563925 out.go:179]   - env NO_PROXY=192.168.49.2
	I1122 00:14:57.131626  563925 cli_runner.go:164] Run: docker network inspect ha-561110 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:14:57.158795  563925 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1122 00:14:57.173001  563925 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:14:57.195629  563925 mustload.go:66] Loading cluster: ha-561110
	I1122 00:14:57.195865  563925 config.go:182] Loaded profile config "ha-561110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:14:57.196127  563925 cli_runner.go:164] Run: docker container inspect ha-561110 --format={{.State.Status}}
	I1122 00:14:57.223215  563925 host.go:66] Checking if "ha-561110" exists ...
	I1122 00:14:57.223486  563925 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110 for IP: 192.168.49.3
	I1122 00:14:57.223499  563925 certs.go:195] generating shared ca certs ...
	I1122 00:14:57.223514  563925 certs.go:227] acquiring lock for ca certs: {Name:mkaf4c79493334cb2058b349c36be7473837f9b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:14:57.223627  563925 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key
	I1122 00:14:57.223673  563925 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key
	I1122 00:14:57.223683  563925 certs.go:257] generating profile certs ...
	I1122 00:14:57.223760  563925 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/client.key
	I1122 00:14:57.223818  563925 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.key.1995a48d
	I1122 00:14:57.223886  563925 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.key
	I1122 00:14:57.223904  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1122 00:14:57.223916  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1122 00:14:57.223932  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1122 00:14:57.223943  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1122 00:14:57.223958  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1122 00:14:57.223970  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1122 00:14:57.223985  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1122 00:14:57.223995  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1122 00:14:57.224044  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem (1338 bytes)
	W1122 00:14:57.224081  563925 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937_empty.pem, impossibly tiny 0 bytes
	I1122 00:14:57.224093  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:14:57.224122  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:14:57.224153  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:14:57.224179  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem (1675 bytes)
	I1122 00:14:57.224229  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem (1708 bytes)
	I1122 00:14:57.224300  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem -> /usr/share/ca-certificates/5169372.pem
	I1122 00:14:57.224317  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:14:57.224334  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem -> /usr/share/ca-certificates/516937.pem
	I1122 00:14:57.224393  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110
	I1122 00:14:57.252760  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33535 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110/id_rsa Username:docker}
	I1122 00:14:57.354098  563925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1122 00:14:57.358457  563925 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1122 00:14:57.367394  563925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1122 00:14:57.371898  563925 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1122 00:14:57.380426  563925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1122 00:14:57.384846  563925 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1122 00:14:57.393409  563925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1122 00:14:57.397317  563925 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1122 00:14:57.405462  563925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1122 00:14:57.409765  563925 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1122 00:14:57.418123  563925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1122 00:14:57.422240  563925 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1122 00:14:57.430625  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:14:57.448740  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1122 00:14:57.466976  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:14:57.489136  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:14:57.510655  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1122 00:14:57.531352  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1122 00:14:57.551538  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:14:57.572743  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1122 00:14:57.593047  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /usr/share/ca-certificates/5169372.pem (1708 bytes)
	I1122 00:14:57.616537  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:14:57.636347  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem --> /usr/share/ca-certificates/516937.pem (1338 bytes)
	I1122 00:14:57.655714  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1122 00:14:57.671132  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1122 00:14:57.686013  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1122 00:14:57.702655  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1122 00:14:57.717580  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1122 00:14:57.733104  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1122 00:14:57.748086  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1122 00:14:57.762829  563925 ssh_runner.go:195] Run: openssl version
	I1122 00:14:57.770255  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5169372.pem && ln -fs /usr/share/ca-certificates/5169372.pem /etc/ssl/certs/5169372.pem"
	I1122 00:14:57.779598  563925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5169372.pem
	I1122 00:14:57.784055  563925 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:55 /usr/share/ca-certificates/5169372.pem
	I1122 00:14:57.784140  563925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5169372.pem
	I1122 00:14:57.827123  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5169372.pem /etc/ssl/certs/3ec20f2e.0"
	I1122 00:14:57.836065  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:14:57.845341  563925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:14:57.849594  563925 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:14:57.849679  563925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:14:57.893282  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1122 00:14:57.903127  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516937.pem && ln -fs /usr/share/ca-certificates/516937.pem /etc/ssl/certs/516937.pem"
	I1122 00:14:57.912201  563925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516937.pem
	I1122 00:14:57.916336  563925 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:55 /usr/share/ca-certificates/516937.pem
	I1122 00:14:57.916418  563925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516937.pem
	I1122 00:14:57.959761  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516937.pem /etc/ssl/certs/51391683.0"
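	(Editor's note: the lines above install each PEM into /usr/share/ca-certificates and then symlink it under /etc/ssl/certs by its OpenSSL subject hash, e.g. 3ec20f2e.0, so TLS clients that scan the hashed cert directory can resolve it. A minimal Go sketch of that step follows; installCACert is a hypothetical helper and, like the log, it assumes the openssl binary is on PATH and the process can write /etc/ssl/certs.)

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCACert computes the OpenSSL subject hash of a PEM certificate and
    // points /etc/ssl/certs/<hash>.0 at it, mirroring the `openssl x509 -hash`
    // plus `ln -fs` sequence in the log above.
    func installCACert(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", pemPath, err)
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        // Recreate the symlink idempotently, like the `test -L || ln -fs` check above.
        _ = os.Remove(link)
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }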
	I1122 00:14:57.969369  563925 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:14:57.974254  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1122 00:14:58.017064  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1122 00:14:58.070486  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1122 00:14:58.116182  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1122 00:14:58.158146  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1122 00:14:58.220397  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
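	(Editor's note: the `openssl x509 -noout -checkend 86400` runs above ask whether each control-plane certificate is still valid 24 hours from now. The same check expressed in Go is sketched below; checkEnd is a hypothetical helper shown only to make the semantics of -checkend concrete.)

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // checkEnd reports whether the first certificate in a PEM file will still be
    // valid `window` from now, i.e. the question `openssl x509 -checkend` answers.
    func checkEnd(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).Before(cert.NotAfter), nil
    }

    func main() {
        ok, err := checkEnd("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
        fmt.Println("valid for another 24h:", ok, err)
    }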
	I1122 00:14:58.263034  563925 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1122 00:14:58.263156  563925 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-561110-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-561110 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1122 00:14:58.263186  563925 kube-vip.go:115] generating kube-vip config ...
	I1122 00:14:58.263244  563925 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1122 00:14:58.282844  563925 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1122 00:14:58.282918  563925 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
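	(Editor's note: a few lines above this kube-vip static-pod manifest, kube-vip.go probes for the ip_vs kernel module with `lsmod | grep ip_vs` and, since the module is absent, falls back to generating the manifest without IPVS-based control-plane load-balancing. A small Go sketch of that probe follows; ipvsLoaded is a hypothetical helper and assumes it runs on the node itself, reading /proc/modules, which is the data source behind lsmod.)

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // ipvsLoaded scans /proc/modules for an ip_vs entry; absence triggers the
    // "giving up enabling control-plane load-balancing" fallback seen above.
    func ipvsLoaded() (bool, error) {
        f, err := os.Open("/proc/modules")
        if err != nil {
            return false, err
        }
        defer f.Close()
        s := bufio.NewScanner(f)
        for s.Scan() {
            if strings.HasPrefix(s.Text(), "ip_vs") {
                return true, nil
            }
        }
        return false, s.Err()
    }

    func main() {
        ok, err := ipvsLoaded()
        fmt.Println("ip_vs loaded:", ok, err)
    }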
	I1122 00:14:58.282999  563925 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 00:14:58.293245  563925 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:14:58.293334  563925 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1122 00:14:58.306481  563925 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1122 00:14:58.327177  563925 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:14:58.341755  563925 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1122 00:14:58.358483  563925 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1122 00:14:58.362397  563925 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
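	(Editor's note: the one-line shell command above rewrites /etc/hosts so that control-plane.minikube.internal resolves to the HA virtual IP 192.168.49.254: it drops any existing line for that host name and appends a fresh entry. A rough Go equivalent is sketched below; ensureHostsEntry is a hypothetical helper, and unlike the real command it writes the file directly instead of going through a temp file and `sudo cp`.)

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry removes any /etc/hosts line ending in "<TAB>host" and
    // appends "ip<TAB>host", mirroring the grep -v / echo pipeline above.
    func ensureHostsEntry(ip, host string) error {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+host) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+host)
        return os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        fmt.Println(ensureHostsEntry("192.168.49.254", "control-plane.minikube.internal"))
    }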
	I1122 00:14:58.372758  563925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:14:58.574763  563925 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:14:58.589366  563925 config.go:182] Loaded profile config "ha-561110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:14:58.589071  563925 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:14:58.595464  563925 out.go:179] * Verifying Kubernetes components...
	I1122 00:14:58.597975  563925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:14:58.780512  563925 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:14:58.804624  563925 kapi.go:59] client config for ha-561110: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/client.crt", KeyFile:"/home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/client.key", CAFile:"/home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1122 00:14:58.804704  563925 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1122 00:14:58.804940  563925 node_ready.go:35] waiting up to 6m0s for node "ha-561110-m02" to be "Ready" ...
	I1122 00:15:18.370415  563925 node_ready.go:49] node "ha-561110-m02" is "Ready"
	I1122 00:15:18.370443  563925 node_ready.go:38] duration metric: took 19.565489572s for node "ha-561110-m02" to be "Ready" ...
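	(Editor's note: node_ready.go above waits up to 6m0s for node "ha-561110-m02" to report the Ready condition, which took about 19.6s here. A roughly equivalent client-go polling loop is sketched below for orientation; waitNodeReady is a hypothetical helper, the kubeconfig path is taken from the scp step earlier in the log, and minikube's own implementation differs in its retry details.)

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the API server until the named node reports
    // NodeReady=True or the timeout expires.
    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("node %q not Ready within %s", name, timeout)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        fmt.Println(waitNodeReady(cs, "ha-561110-m02", 6*time.Minute))
    }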
	I1122 00:15:18.370457  563925 api_server.go:52] waiting for apiserver process to appear ...
	I1122 00:15:18.370519  563925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:15:18.871467  563925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:15:19.371300  563925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:15:19.387145  563925 api_server.go:72] duration metric: took 20.797721396s to wait for apiserver process to appear ...
	I1122 00:15:19.387224  563925 api_server.go:88] waiting for apiserver healthz status ...
	I1122 00:15:19.387265  563925 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1122 00:15:19.396105  563925 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1122 00:15:19.396183  563925 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1122 00:15:19.887636  563925 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1122 00:15:19.899172  563925 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1122 00:15:19.899202  563925 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1122 00:15:20.387390  563925 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1122 00:15:20.399975  563925 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1122 00:15:20.401338  563925 api_server.go:141] control plane version: v1.34.1
	I1122 00:15:20.401367  563925 api_server.go:131] duration metric: took 1.014115281s to wait for apiserver health ...
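	(Editor's note: the /healthz polling above returns 500 while post-start hooks such as poststarthook/rbac/bootstrap-roles are still pending, then 200 "ok" about a second later. A minimal Go sketch of that wait loop follows; pollHealthz is a hypothetical helper, and it uses InsecureSkipVerify only to stay short, whereas the real check presents the cluster CA and client certificates from the rest.Config logged earlier.)

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // pollHealthz keeps requesting the apiserver /healthz endpoint until it
    // returns HTTP 200, printing the failing hooks reported by 500 responses.
    func pollHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                // A 500 body lists the post-start hooks that have not finished yet.
                fmt.Printf("healthz %d:\n%s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("healthz did not return ok within %s", timeout)
    }

    func main() {
        fmt.Println(pollHealthz("https://192.168.49.2:8443/healthz", time.Minute))
    }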
	I1122 00:15:20.401377  563925 system_pods.go:43] waiting for kube-system pods to appear ...
	I1122 00:15:20.428331  563925 system_pods.go:59] 25 kube-system pods found
	I1122 00:15:20.428372  563925 system_pods.go:61] "coredns-66bc5c9577-rrkkw" [97c7e1c9-e499-4131-957e-6da8bd29c994] Running
	I1122 00:15:20.428379  563925 system_pods.go:61] "coredns-66bc5c9577-vp8f5" [6d945620-203b-4e4e-b9e2-ef07e6b0f89b] Running
	I1122 00:15:20.428413  563925 system_pods.go:61] "etcd-ha-561110" [5a87193f-0871-4a4c-a409-4d52da31b88b] Running
	I1122 00:15:20.428428  563925 system_pods.go:61] "etcd-ha-561110-m02" [2c4dde3d-3a4c-4d47-b52c-980920facb09] Running
	I1122 00:15:20.428433  563925 system_pods.go:61] "etcd-ha-561110-m03" [d9d64b02-a6c9-48d1-9633-71cfae997fa8] Running
	I1122 00:15:20.428436  563925 system_pods.go:61] "kindnet-4tkd6" [63b063bf-1876-47e2-acb2-a5561b22b975] Running
	I1122 00:15:20.428440  563925 system_pods.go:61] "kindnet-7g65m" [edeca4a6-de24-4444-be9c-cdcbf744f52a] Running
	I1122 00:15:20.428448  563925 system_pods.go:61] "kindnet-dltvw" [ec75f262-ca6c-4766-bc81-60a4e51e94f0] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1122 00:15:20.428457  563925 system_pods.go:61] "kindnet-w4kh7" [61649d36-e515-4c70-831e-2a509e3b67f3] Running
	I1122 00:15:20.428464  563925 system_pods.go:61] "kube-apiserver-ha-561110" [e94b2c4e-8cc8-45e3-9b89-d1805b254c99] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1122 00:15:20.428469  563925 system_pods.go:61] "kube-apiserver-ha-561110-m02" [98ee0c6b-6094-4264-98e8-69d3f1bd0c04] Running
	I1122 00:15:20.428491  563925 system_pods.go:61] "kube-apiserver-ha-561110-m03" [5b0131a7-0af0-48ff-8889-e82b8a2a2e43] Running
	I1122 00:15:20.428503  563925 system_pods.go:61] "kube-controller-manager-ha-561110" [db7b105b-9fa2-43a8-a08d-837b9960db31] Running
	I1122 00:15:20.428508  563925 system_pods.go:61] "kube-controller-manager-ha-561110-m02" [2bb17b90-45c6-4c74-96a1-81f05c51a0cf] Running
	I1122 00:15:20.428511  563925 system_pods.go:61] "kube-controller-manager-ha-561110-m03" [a1fefba1-3967-4b58-b8e7-2bec4a7b896b] Running
	I1122 00:15:20.428516  563925 system_pods.go:61] "kube-proxy-2vctt" [f89e3d32-bca1-4b9a-8531-7eab74e6e0da] Running
	I1122 00:15:20.428527  563925 system_pods.go:61] "kube-proxy-b8wb5" [ac8e8b19-cd59-454e-ab83-b9d08cf4cea0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1122 00:15:20.428533  563925 system_pods.go:61] "kube-proxy-fh5cv" [318c6763-fea1-4564-86f6-18cfad691213] Running
	I1122 00:15:20.428542  563925 system_pods.go:61] "kube-proxy-v5ndg" [5e85dc4a-71dd-40c6-86f6-5c79b7f45194] Running
	I1122 00:15:20.428546  563925 system_pods.go:61] "kube-scheduler-ha-561110" [3267ceff-350f-471c-8e2b-9be8b8bdc471] Running
	I1122 00:15:20.428567  563925 system_pods.go:61] "kube-scheduler-ha-561110-m02" [75edb16c-cd99-46b4-bd49-e0646746877f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1122 00:15:20.428578  563925 system_pods.go:61] "kube-scheduler-ha-561110-m03" [6763f28e-1726-4a48-bac3-1a7e5f82595e] Running
	I1122 00:15:20.428582  563925 system_pods.go:61] "kube-vip-ha-561110-m02" [e4be1217-de52-4c2a-8cfb-a411559af009] Running
	I1122 00:15:20.428596  563925 system_pods.go:61] "kube-vip-ha-561110-m03" [5e7072f7-2a3d-4add-bc1d-e69a03dd28cb] Running
	I1122 00:15:20.428608  563925 system_pods.go:61] "storage-provisioner" [6bf95a26-263b-4088-904d-b344d4826342] Running
	I1122 00:15:20.428614  563925 system_pods.go:74] duration metric: took 27.23022ms to wait for pod list to return data ...
	I1122 00:15:20.428622  563925 default_sa.go:34] waiting for default service account to be created ...
	I1122 00:15:20.444498  563925 default_sa.go:45] found service account: "default"
	I1122 00:15:20.444536  563925 default_sa.go:55] duration metric: took 15.88117ms for default service account to be created ...
	I1122 00:15:20.444583  563925 system_pods.go:116] waiting for k8s-apps to be running ...
	I1122 00:15:20.468591  563925 system_pods.go:86] 25 kube-system pods found
	I1122 00:15:20.468633  563925 system_pods.go:89] "coredns-66bc5c9577-rrkkw" [97c7e1c9-e499-4131-957e-6da8bd29c994] Running
	I1122 00:15:20.468662  563925 system_pods.go:89] "coredns-66bc5c9577-vp8f5" [6d945620-203b-4e4e-b9e2-ef07e6b0f89b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:15:20.468674  563925 system_pods.go:89] "etcd-ha-561110" [5a87193f-0871-4a4c-a409-4d52da31b88b] Running
	I1122 00:15:20.468681  563925 system_pods.go:89] "etcd-ha-561110-m02" [2c4dde3d-3a4c-4d47-b52c-980920facb09] Running
	I1122 00:15:20.468703  563925 system_pods.go:89] "etcd-ha-561110-m03" [d9d64b02-a6c9-48d1-9633-71cfae997fa8] Running
	I1122 00:15:20.468713  563925 system_pods.go:89] "kindnet-4tkd6" [63b063bf-1876-47e2-acb2-a5561b22b975] Running
	I1122 00:15:20.468719  563925 system_pods.go:89] "kindnet-7g65m" [edeca4a6-de24-4444-be9c-cdcbf744f52a] Running
	I1122 00:15:20.468727  563925 system_pods.go:89] "kindnet-dltvw" [ec75f262-ca6c-4766-bc81-60a4e51e94f0] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1122 00:15:20.468736  563925 system_pods.go:89] "kindnet-w4kh7" [61649d36-e515-4c70-831e-2a509e3b67f3] Running
	I1122 00:15:20.468743  563925 system_pods.go:89] "kube-apiserver-ha-561110" [e94b2c4e-8cc8-45e3-9b89-d1805b254c99] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1122 00:15:20.468753  563925 system_pods.go:89] "kube-apiserver-ha-561110-m02" [98ee0c6b-6094-4264-98e8-69d3f1bd0c04] Running
	I1122 00:15:20.468758  563925 system_pods.go:89] "kube-apiserver-ha-561110-m03" [5b0131a7-0af0-48ff-8889-e82b8a2a2e43] Running
	I1122 00:15:20.468762  563925 system_pods.go:89] "kube-controller-manager-ha-561110" [db7b105b-9fa2-43a8-a08d-837b9960db31] Running
	I1122 00:15:20.468785  563925 system_pods.go:89] "kube-controller-manager-ha-561110-m02" [2bb17b90-45c6-4c74-96a1-81f05c51a0cf] Running
	I1122 00:15:20.468796  563925 system_pods.go:89] "kube-controller-manager-ha-561110-m03" [a1fefba1-3967-4b58-b8e7-2bec4a7b896b] Running
	I1122 00:15:20.468800  563925 system_pods.go:89] "kube-proxy-2vctt" [f89e3d32-bca1-4b9a-8531-7eab74e6e0da] Running
	I1122 00:15:20.468809  563925 system_pods.go:89] "kube-proxy-b8wb5" [ac8e8b19-cd59-454e-ab83-b9d08cf4cea0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1122 00:15:20.468818  563925 system_pods.go:89] "kube-proxy-fh5cv" [318c6763-fea1-4564-86f6-18cfad691213] Running
	I1122 00:15:20.468823  563925 system_pods.go:89] "kube-proxy-v5ndg" [5e85dc4a-71dd-40c6-86f6-5c79b7f45194] Running
	I1122 00:15:20.468827  563925 system_pods.go:89] "kube-scheduler-ha-561110" [3267ceff-350f-471c-8e2b-9be8b8bdc471] Running
	I1122 00:15:20.468833  563925 system_pods.go:89] "kube-scheduler-ha-561110-m02" [75edb16c-cd99-46b4-bd49-e0646746877f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1122 00:15:20.468841  563925 system_pods.go:89] "kube-scheduler-ha-561110-m03" [6763f28e-1726-4a48-bac3-1a7e5f82595e] Running
	I1122 00:15:20.468869  563925 system_pods.go:89] "kube-vip-ha-561110-m02" [e4be1217-de52-4c2a-8cfb-a411559af009] Running
	I1122 00:15:20.468881  563925 system_pods.go:89] "kube-vip-ha-561110-m03" [5e7072f7-2a3d-4add-bc1d-e69a03dd28cb] Running
	I1122 00:15:20.468887  563925 system_pods.go:89] "storage-provisioner" [6bf95a26-263b-4088-904d-b344d4826342] Running
	I1122 00:15:20.468911  563925 system_pods.go:126] duration metric: took 24.319558ms to wait for k8s-apps to be running ...
	I1122 00:15:20.468936  563925 system_svc.go:44] waiting for kubelet service to be running ....
	I1122 00:15:20.469011  563925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:15:20.486178  563925 system_svc.go:56] duration metric: took 17.232261ms WaitForService to wait for kubelet
	I1122 00:15:20.486213  563925 kubeadm.go:587] duration metric: took 21.896794227s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:15:20.486246  563925 node_conditions.go:102] verifying NodePressure condition ...
	I1122 00:15:20.505594  563925 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1122 00:15:20.505637  563925 node_conditions.go:123] node cpu capacity is 2
	I1122 00:15:20.505651  563925 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1122 00:15:20.505673  563925 node_conditions.go:123] node cpu capacity is 2
	I1122 00:15:20.505684  563925 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1122 00:15:20.505689  563925 node_conditions.go:123] node cpu capacity is 2
	I1122 00:15:20.505693  563925 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1122 00:15:20.505697  563925 node_conditions.go:123] node cpu capacity is 2
	I1122 00:15:20.505716  563925 node_conditions.go:105] duration metric: took 19.443078ms to run NodePressure ...
	I1122 00:15:20.505736  563925 start.go:242] waiting for startup goroutines ...
	I1122 00:15:20.505776  563925 start.go:256] writing updated cluster config ...
	I1122 00:15:20.509517  563925 out.go:203] 
	I1122 00:15:20.512839  563925 config.go:182] Loaded profile config "ha-561110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:15:20.513009  563925 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/config.json ...
	I1122 00:15:20.516821  563925 out.go:179] * Starting "ha-561110-m03" control-plane node in "ha-561110" cluster
	I1122 00:15:20.520742  563925 cache.go:134] Beginning downloading kic base image for docker with crio
	I1122 00:15:20.524203  563925 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:15:20.527654  563925 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:15:20.527732  563925 cache.go:65] Caching tarball of preloaded images
	I1122 00:15:20.527695  563925 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:15:20.528031  563925 preload.go:238] Found /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1122 00:15:20.528049  563925 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1122 00:15:20.528201  563925 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/config.json ...
	I1122 00:15:20.552866  563925 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:15:20.552887  563925 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:15:20.552899  563925 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:15:20.552922  563925 start.go:360] acquireMachinesLock for ha-561110-m03: {Name:mk8a19cfae84d78ad843d3f8169a3190cadb2d92 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:15:20.552971  563925 start.go:364] duration metric: took 34.805µs to acquireMachinesLock for "ha-561110-m03"
	I1122 00:15:20.552989  563925 start.go:96] Skipping create...Using existing machine configuration
	I1122 00:15:20.552994  563925 fix.go:54] fixHost starting: m03
	I1122 00:15:20.553255  563925 cli_runner.go:164] Run: docker container inspect ha-561110-m03 --format={{.State.Status}}
	I1122 00:15:20.581965  563925 fix.go:112] recreateIfNeeded on ha-561110-m03: state=Stopped err=<nil>
	W1122 00:15:20.581999  563925 fix.go:138] unexpected machine state, will restart: <nil>
	I1122 00:15:20.586013  563925 out.go:252] * Restarting existing docker container for "ha-561110-m03" ...
	I1122 00:15:20.586099  563925 cli_runner.go:164] Run: docker start ha-561110-m03
	I1122 00:15:20.954348  563925 cli_runner.go:164] Run: docker container inspect ha-561110-m03 --format={{.State.Status}}
	I1122 00:15:20.979345  563925 kic.go:430] container "ha-561110-m03" state is running.
	I1122 00:15:20.979708  563925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-561110-m03
	I1122 00:15:21.002371  563925 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/config.json ...
	I1122 00:15:21.002682  563925 machine.go:94] provisionDockerMachine start ...
	I1122 00:15:21.002758  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m03
	I1122 00:15:21.032872  563925 main.go:143] libmachine: Using SSH client type: native
	I1122 00:15:21.033195  563925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33545 <nil> <nil>}
	I1122 00:15:21.033211  563925 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:15:21.033881  563925 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1122 00:15:24.293634  563925 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-561110-m03
	
	I1122 00:15:24.293664  563925 ubuntu.go:182] provisioning hostname "ha-561110-m03"
	I1122 00:15:24.293763  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m03
	I1122 00:15:24.324599  563925 main.go:143] libmachine: Using SSH client type: native
	I1122 00:15:24.324926  563925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33545 <nil> <nil>}
	I1122 00:15:24.324939  563925 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-561110-m03 && echo "ha-561110-m03" | sudo tee /etc/hostname
	I1122 00:15:24.595129  563925 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-561110-m03
	
	I1122 00:15:24.595249  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m03
	I1122 00:15:24.620733  563925 main.go:143] libmachine: Using SSH client type: native
	I1122 00:15:24.621049  563925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33545 <nil> <nil>}
	I1122 00:15:24.621676  563925 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-561110-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-561110-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-561110-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:15:24.856356  563925 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1122 00:15:24.856384  563925 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-513600/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-513600/.minikube}
	I1122 00:15:24.856400  563925 ubuntu.go:190] setting up certificates
	I1122 00:15:24.856434  563925 provision.go:84] configureAuth start
	I1122 00:15:24.856521  563925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-561110-m03
	I1122 00:15:24.885855  563925 provision.go:143] copyHostCerts
	I1122 00:15:24.885898  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem
	I1122 00:15:24.885930  563925 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem, removing ...
	I1122 00:15:24.885941  563925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem
	I1122 00:15:24.886031  563925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem (1078 bytes)
	I1122 00:15:24.886116  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem
	I1122 00:15:24.886139  563925 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem, removing ...
	I1122 00:15:24.886147  563925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem
	I1122 00:15:24.886175  563925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem (1123 bytes)
	I1122 00:15:24.886221  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem
	I1122 00:15:24.886242  563925 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem, removing ...
	I1122 00:15:24.886246  563925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem
	I1122 00:15:24.886271  563925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem (1675 bytes)
	I1122 00:15:24.886322  563925 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem org=jenkins.ha-561110-m03 san=[127.0.0.1 192.168.49.4 ha-561110-m03 localhost minikube]
	I1122 00:15:25.343405  563925 provision.go:177] copyRemoteCerts
	I1122 00:15:25.343499  563925 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:15:25.343569  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m03
	I1122 00:15:25.363935  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33545 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110-m03/id_rsa Username:docker}
	I1122 00:15:25.550286  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1122 00:15:25.550350  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1122 00:15:25.575299  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1122 00:15:25.575374  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:15:25.598237  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1122 00:15:25.598338  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1122 00:15:25.628049  563925 provision.go:87] duration metric: took 771.594834ms to configureAuth
	I1122 00:15:25.628077  563925 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:15:25.628358  563925 config.go:182] Loaded profile config "ha-561110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:15:25.628508  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m03
	I1122 00:15:25.662079  563925 main.go:143] libmachine: Using SSH client type: native
	I1122 00:15:25.662398  563925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33545 <nil> <nil>}
	I1122 00:15:25.662419  563925 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1122 00:15:26.350066  563925 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1122 00:15:26.350092  563925 machine.go:97] duration metric: took 5.34739065s to provisionDockerMachine
	I1122 00:15:26.350164  563925 start.go:293] postStartSetup for "ha-561110-m03" (driver="docker")
	I1122 00:15:26.350184  563925 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:15:26.350274  563925 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:15:26.350334  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m03
	I1122 00:15:26.375980  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33545 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110-m03/id_rsa Username:docker}
	I1122 00:15:26.492303  563925 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:15:26.496241  563925 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:15:26.496272  563925 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:15:26.496284  563925 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/addons for local assets ...
	I1122 00:15:26.496339  563925 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/files for local assets ...
	I1122 00:15:26.496422  563925 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem -> 5169372.pem in /etc/ssl/certs
	I1122 00:15:26.496433  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem -> /etc/ssl/certs/5169372.pem
	I1122 00:15:26.496535  563925 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:15:26.505321  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /etc/ssl/certs/5169372.pem (1708 bytes)
	I1122 00:15:26.526339  563925 start.go:296] duration metric: took 176.150409ms for postStartSetup
	I1122 00:15:26.526443  563925 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:15:26.526504  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m03
	I1122 00:15:26.550085  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33545 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110-m03/id_rsa Username:docker}
	I1122 00:15:26.663353  563925 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:15:26.670831  563925 fix.go:56] duration metric: took 6.117814975s for fixHost
	I1122 00:15:26.670857  563925 start.go:83] releasing machines lock for "ha-561110-m03", held for 6.117877799s
	I1122 00:15:26.670925  563925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-561110-m03
	I1122 00:15:26.706528  563925 out.go:179] * Found network options:
	I1122 00:15:26.709469  563925 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1122 00:15:26.712333  563925 proxy.go:120] fail to check proxy env: Error ip not in block
	W1122 00:15:26.712371  563925 proxy.go:120] fail to check proxy env: Error ip not in block
	W1122 00:15:26.712395  563925 proxy.go:120] fail to check proxy env: Error ip not in block
	W1122 00:15:26.712406  563925 proxy.go:120] fail to check proxy env: Error ip not in block
	I1122 00:15:26.712494  563925 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1122 00:15:26.712541  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m03
	I1122 00:15:26.712807  563925 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:15:26.712873  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m03
	I1122 00:15:26.749585  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33545 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110-m03/id_rsa Username:docker}
	I1122 00:15:26.751996  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33545 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110-m03/id_rsa Username:docker}
	I1122 00:15:27.082598  563925 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:15:27.101543  563925 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:15:27.101616  563925 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:15:27.126235  563925 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1122 00:15:27.126257  563925 start.go:496] detecting cgroup driver to use...
	I1122 00:15:27.126287  563925 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1122 00:15:27.126334  563925 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1122 00:15:27.165923  563925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1122 00:15:27.239673  563925 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:15:27.239811  563925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:15:27.293000  563925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:15:27.338853  563925 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:15:27.741533  563925 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:15:28.092677  563925 docker.go:234] disabling docker service ...
	I1122 00:15:28.092771  563925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:15:28.168796  563925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:15:28.226242  563925 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:15:28.659941  563925 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:15:29.058606  563925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:15:29.101920  563925 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:15:29.136744  563925 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1122 00:15:29.136856  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:15:29.162030  563925 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1122 00:15:29.162149  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:15:29.183947  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:15:29.221891  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:15:29.244672  563925 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:15:29.275560  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:15:29.306222  563925 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:15:29.332094  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:15:29.350775  563925 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:15:29.370006  563925 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:15:29.391362  563925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:15:29.706214  563925 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1122 00:17:00.097219  563925 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.390962529s)
	I1122 00:17:00.097249  563925 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1122 00:17:00.097319  563925 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1122 00:17:00.113544  563925 start.go:564] Will wait 60s for crictl version
	I1122 00:17:00.113649  563925 ssh_runner.go:195] Run: which crictl
	I1122 00:17:00.136784  563925 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:17:00.321902  563925 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1122 00:17:00.322038  563925 ssh_runner.go:195] Run: crio --version
	I1122 00:17:00.437751  563925 ssh_runner.go:195] Run: crio --version
	I1122 00:17:00.498700  563925 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1122 00:17:00.502322  563925 out.go:179]   - env NO_PROXY=192.168.49.2
	I1122 00:17:00.505365  563925 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1122 00:17:00.508493  563925 cli_runner.go:164] Run: docker network inspect ha-561110 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:17:00.538039  563925 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1122 00:17:00.545403  563925 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:17:00.558621  563925 mustload.go:66] Loading cluster: ha-561110
	I1122 00:17:00.558938  563925 config.go:182] Loaded profile config "ha-561110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:17:00.559221  563925 cli_runner.go:164] Run: docker container inspect ha-561110 --format={{.State.Status}}
	I1122 00:17:00.586783  563925 host.go:66] Checking if "ha-561110" exists ...
	I1122 00:17:00.587143  563925 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110 for IP: 192.168.49.4
	I1122 00:17:00.587159  563925 certs.go:195] generating shared ca certs ...
	I1122 00:17:00.587181  563925 certs.go:227] acquiring lock for ca certs: {Name:mkaf4c79493334cb2058b349c36be7473837f9b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:17:00.587353  563925 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key
	I1122 00:17:00.587400  563925 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key
	I1122 00:17:00.587412  563925 certs.go:257] generating profile certs ...
	I1122 00:17:00.587496  563925 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/client.key
	I1122 00:17:00.587573  563925 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.key.be48eb15
	I1122 00:17:00.587622  563925 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.key
	I1122 00:17:00.587635  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1122 00:17:00.587651  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1122 00:17:00.587667  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1122 00:17:00.587723  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1122 00:17:00.587739  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1122 00:17:00.587752  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1122 00:17:00.587768  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1122 00:17:00.587778  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1122 00:17:00.587836  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem (1338 bytes)
	W1122 00:17:00.587877  563925 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937_empty.pem, impossibly tiny 0 bytes
	I1122 00:17:00.587891  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:17:00.587929  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:17:00.587961  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:17:00.587990  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem (1675 bytes)
	I1122 00:17:00.588101  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem (1708 bytes)
	I1122 00:17:00.588199  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem -> /usr/share/ca-certificates/516937.pem
	I1122 00:17:00.588226  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem -> /usr/share/ca-certificates/5169372.pem
	I1122 00:17:00.588241  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:17:00.588312  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110
	I1122 00:17:00.613873  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33535 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110/id_rsa Username:docker}
	I1122 00:17:00.714215  563925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1122 00:17:00.718718  563925 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1122 00:17:00.729019  563925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1122 00:17:00.733330  563925 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1122 00:17:00.743477  563925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1122 00:17:00.747658  563925 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1122 00:17:00.758201  563925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1122 00:17:00.763435  563925 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1122 00:17:00.773425  563925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1122 00:17:00.777456  563925 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1122 00:17:00.787246  563925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1122 00:17:00.791598  563925 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1122 00:17:00.801660  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:17:00.826055  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1122 00:17:00.848933  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:17:00.888604  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:17:00.921496  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1122 00:17:00.951086  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1122 00:17:00.975145  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:17:00.999138  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1122 00:17:01.024534  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem --> /usr/share/ca-certificates/516937.pem (1338 bytes)
	I1122 00:17:01.046560  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /usr/share/ca-certificates/5169372.pem (1708 bytes)
	I1122 00:17:01.072877  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:17:01.103089  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1122 00:17:01.119601  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1122 00:17:01.136419  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1122 00:17:01.153380  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1122 00:17:01.171240  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1122 00:17:01.202584  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1122 00:17:01.223852  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1122 00:17:01.247292  563925 ssh_runner.go:195] Run: openssl version
	I1122 00:17:01.259516  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:17:01.280780  563925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:17:01.289039  563925 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:17:01.289158  563925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:17:01.373640  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1122 00:17:01.395461  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516937.pem && ln -fs /usr/share/ca-certificates/516937.pem /etc/ssl/certs/516937.pem"
	I1122 00:17:01.420524  563925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516937.pem
	I1122 00:17:01.426623  563925 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:55 /usr/share/ca-certificates/516937.pem
	I1122 00:17:01.426698  563925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516937.pem
	I1122 00:17:01.478449  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516937.pem /etc/ssl/certs/51391683.0"
	I1122 00:17:01.490493  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5169372.pem && ln -fs /usr/share/ca-certificates/5169372.pem /etc/ssl/certs/5169372.pem"
	I1122 00:17:01.502084  563925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5169372.pem
	I1122 00:17:01.507855  563925 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:55 /usr/share/ca-certificates/5169372.pem
	I1122 00:17:01.507956  563925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5169372.pem
	I1122 00:17:01.587957  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5169372.pem /etc/ssl/certs/3ec20f2e.0"
	I1122 00:17:01.599719  563925 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:17:01.605126  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1122 00:17:01.660029  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1122 00:17:01.712345  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1122 00:17:01.786467  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1122 00:17:01.862166  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1122 00:17:01.946187  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1122 00:17:02.010384  563925 kubeadm.go:935] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1122 00:17:02.010523  563925 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-561110-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-561110 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1122 00:17:02.010557  563925 kube-vip.go:115] generating kube-vip config ...
	I1122 00:17:02.010619  563925 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1122 00:17:02.037246  563925 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1122 00:17:02.037316  563925 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1122 00:17:02.037405  563925 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 00:17:02.052472  563925 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:17:02.052567  563925 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1122 00:17:02.073857  563925 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1122 00:17:02.112139  563925 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:17:02.133854  563925 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1122 00:17:02.152649  563925 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1122 00:17:02.158389  563925 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:17:02.184228  563925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:17:02.493772  563925 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:17:02.514312  563925 start.go:236] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:17:02.514696  563925 config.go:182] Loaded profile config "ha-561110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:17:02.518824  563925 out.go:179] * Verifying Kubernetes components...
	I1122 00:17:02.521919  563925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:17:02.746981  563925 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:17:02.765468  563925 kapi.go:59] client config for ha-561110: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/client.crt", KeyFile:"/home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/client.key", CAFile:"/home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1122 00:17:02.765589  563925 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1122 00:17:02.765898  563925 node_ready.go:35] waiting up to 6m0s for node "ha-561110-m03" to be "Ready" ...
	W1122 00:17:04.770183  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:06.771513  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:09.269611  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:11.270683  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:13.275612  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:15.769660  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:17.769933  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:20.269315  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:22.270943  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:24.769260  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:26.770369  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:29.269015  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:31.269858  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:33.269945  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:35.769971  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:38.269922  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:40.270335  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:42.271149  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:44.770140  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:47.269690  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:49.270654  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:51.770465  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:54.269768  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:56.769254  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:58.769625  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:00.770202  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:02.773270  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:05.270130  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:07.271583  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:09.769397  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:11.770012  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:13.770106  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:16.270008  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:18.771373  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:21.270047  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:23.768948  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:25.770213  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:28.269635  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:30.770096  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:32.771794  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:35.270059  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:37.769842  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:40.269289  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:42.273345  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:44.275125  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:46.776656  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:49.270280  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:51.770076  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:54.269588  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:56.270135  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:58.768991  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:00.771422  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:03.269840  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:05.270420  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:07.770020  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:10.268980  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:12.269695  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:14.769271  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:16.769509  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:19.270240  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:21.769249  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:23.770580  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:26.269982  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:28.770054  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:31.269163  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:33.269886  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:35.270677  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:37.769622  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:39.769703  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:42.270956  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:44.768762  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:46.769989  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:49.269515  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:51.270122  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:53.769467  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:55.770293  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:58.269947  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:00.322810  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:02.769554  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:04.770551  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:07.269784  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:09.769344  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:11.769990  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:14.269132  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:16.269765  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:18.770174  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:21.269837  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:23.270065  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:25.770172  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:28.269279  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:30.270734  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:32.769392  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:34.769668  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:36.770010  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:38.770203  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:40.770721  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:43.270389  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:45.276123  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:47.770112  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:50.269310  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:52.269861  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:54.270570  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:56.769591  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:58.770126  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:01.270099  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:03.769793  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:05.771503  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:08.269537  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:10.770347  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:13.269687  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:15.270464  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:17.271724  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:19.769950  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:22.269581  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:24.269903  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:26.269977  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:28.769453  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:30.770323  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:33.270153  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:35.769486  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:37.770126  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:39.770389  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:42.273464  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:44.769688  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:46.770370  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:49.269335  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:51.270430  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:53.769776  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:56.269697  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:58.270251  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:00.292924  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:02.779828  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:05.270290  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:07.270475  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:09.769072  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:11.769917  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:13.770097  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:16.269780  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:18.269850  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:20.276178  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:22.770032  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:25.270326  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:27.769736  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:30.270331  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:32.768987  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:35.269587  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:37.770642  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:40.269226  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:42.281918  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:44.770302  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:47.269651  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:49.270011  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:51.770305  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:54.269848  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:56.269962  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:58.770073  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:23:00.770445  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	I1122 00:23:02.766152  563925 node_ready.go:38] duration metric: took 6m0.000206678s for node "ha-561110-m03" to be "Ready" ...
	I1122 00:23:02.769486  563925 out.go:203] 
	W1122 00:23:02.772416  563925 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1122 00:23:02.772436  563925 out.go:285] * 
	* 
	W1122 00:23:02.774635  563925 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1122 00:23:02.776836  563925 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:471: failed to run minikube start. args "out/minikube-linux-arm64 -p ha-561110 node list --alsologtostderr -v 5" : exit status 80
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 node list --alsologtostderr -v 5
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-561110
helpers_test.go:243: (dbg) docker inspect ha-561110:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b491a219f5f6a6da2c04a012513c1a2266c783068f6131eddf3365f4209ece96",
	        "Created": "2025-11-22T00:08:39.249293688Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 564052,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-22T00:14:41.326793505Z",
	            "FinishedAt": "2025-11-22T00:14:40.718153366Z"
	        },
	        "Image": "sha256:97061622515a85470dad13cb046ad5fc5021fcd2b05aa921a620abb52951cd5d",
	        "ResolvConfPath": "/var/lib/docker/containers/b491a219f5f6a6da2c04a012513c1a2266c783068f6131eddf3365f4209ece96/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b491a219f5f6a6da2c04a012513c1a2266c783068f6131eddf3365f4209ece96/hostname",
	        "HostsPath": "/var/lib/docker/containers/b491a219f5f6a6da2c04a012513c1a2266c783068f6131eddf3365f4209ece96/hosts",
	        "LogPath": "/var/lib/docker/containers/b491a219f5f6a6da2c04a012513c1a2266c783068f6131eddf3365f4209ece96/b491a219f5f6a6da2c04a012513c1a2266c783068f6131eddf3365f4209ece96-json.log",
	        "Name": "/ha-561110",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-561110:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-561110",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b491a219f5f6a6da2c04a012513c1a2266c783068f6131eddf3365f4209ece96",
	                "LowerDir": "/var/lib/docker/overlay2/5b04665b7cab2ec18af91a710d518904c279e2a90668f078e04a26ace79c7488-init/diff:/var/lib/docker/overlay2/7e8788c6de692bc1c3758a2bb2c4b8da0fbba26855f855c0f3b655bfbdd92f8e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5b04665b7cab2ec18af91a710d518904c279e2a90668f078e04a26ace79c7488/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5b04665b7cab2ec18af91a710d518904c279e2a90668f078e04a26ace79c7488/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5b04665b7cab2ec18af91a710d518904c279e2a90668f078e04a26ace79c7488/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-561110",
	                "Source": "/var/lib/docker/volumes/ha-561110/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-561110",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-561110",
	                "name.minikube.sigs.k8s.io": "ha-561110",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "63b3a8bfef41783609e300f295bd9c6ce0b188ddea8ed2fd34f5208c58b47581",
	            "SandboxKey": "/var/run/docker/netns/63b3a8bfef41",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33535"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33536"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33539"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33537"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33538"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-561110": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "96:82:2a:2d:1a:a2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b16c782e3da877b947afab8daed1813e31e3d205de3fc5d50df3784dc479d217",
	                    "EndpointID": "61c267346b225270082d2c669fb1fa8e14bbb2c2c81a704ce5a2c8a50f3d07f7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-561110",
	                        "b491a219f5f6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-561110 -n ha-561110
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-561110 logs -n 25: (1.459709343s)
helpers_test.go:260: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-561110 cp ha-561110-m03:/home/docker/cp-test.txt ha-561110-m02:/home/docker/cp-test_ha-561110-m03_ha-561110-m02.txt               │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ ssh     │ ha-561110 ssh -n ha-561110-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ ssh     │ ha-561110 ssh -n ha-561110-m02 sudo cat /home/docker/cp-test_ha-561110-m03_ha-561110-m02.txt                                         │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ cp      │ ha-561110 cp ha-561110-m03:/home/docker/cp-test.txt ha-561110-m04:/home/docker/cp-test_ha-561110-m03_ha-561110-m04.txt               │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ ssh     │ ha-561110 ssh -n ha-561110-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ ssh     │ ha-561110 ssh -n ha-561110-m04 sudo cat /home/docker/cp-test_ha-561110-m03_ha-561110-m04.txt                                         │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ cp      │ ha-561110 cp testdata/cp-test.txt ha-561110-m04:/home/docker/cp-test.txt                                                             │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ ssh     │ ha-561110 ssh -n ha-561110-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ cp      │ ha-561110 cp ha-561110-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2616405813/001/cp-test_ha-561110-m04.txt │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ ssh     │ ha-561110 ssh -n ha-561110-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ cp      │ ha-561110 cp ha-561110-m04:/home/docker/cp-test.txt ha-561110:/home/docker/cp-test_ha-561110-m04_ha-561110.txt                       │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ ssh     │ ha-561110 ssh -n ha-561110-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ ssh     │ ha-561110 ssh -n ha-561110 sudo cat /home/docker/cp-test_ha-561110-m04_ha-561110.txt                                                 │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ cp      │ ha-561110 cp ha-561110-m04:/home/docker/cp-test.txt ha-561110-m02:/home/docker/cp-test_ha-561110-m04_ha-561110-m02.txt               │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ ssh     │ ha-561110 ssh -n ha-561110-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ ssh     │ ha-561110 ssh -n ha-561110-m02 sudo cat /home/docker/cp-test_ha-561110-m04_ha-561110-m02.txt                                         │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ cp      │ ha-561110 cp ha-561110-m04:/home/docker/cp-test.txt ha-561110-m03:/home/docker/cp-test_ha-561110-m04_ha-561110-m03.txt               │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ ssh     │ ha-561110 ssh -n ha-561110-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ ssh     │ ha-561110 ssh -n ha-561110-m03 sudo cat /home/docker/cp-test_ha-561110-m04_ha-561110-m03.txt                                         │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ node    │ ha-561110 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ node    │ ha-561110 node start m02 --alsologtostderr -v 5                                                                                      │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:14 UTC │
	│ node    │ ha-561110 node list --alsologtostderr -v 5                                                                                           │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:14 UTC │                     │
	│ stop    │ ha-561110 stop --alsologtostderr -v 5                                                                                                │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:14 UTC │ 22 Nov 25 00:14 UTC │
	│ start   │ ha-561110 start --wait true --alsologtostderr -v 5                                                                                   │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:14 UTC │                     │
	│ node    │ ha-561110 node list --alsologtostderr -v 5                                                                                           │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:23 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 00:14:41
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1122 00:14:41.051374  563925 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:14:41.051556  563925 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:14:41.051586  563925 out.go:374] Setting ErrFile to fd 2...
	I1122 00:14:41.051607  563925 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:14:41.051880  563925 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1122 00:14:41.052266  563925 out.go:368] Setting JSON to false
	I1122 00:14:41.053166  563925 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":17797,"bootTime":1763752684,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1122 00:14:41.053270  563925 start.go:143] virtualization:  
	I1122 00:14:41.056667  563925 out.go:179] * [ha-561110] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1122 00:14:41.060532  563925 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:14:41.060603  563925 notify.go:221] Checking for updates...
	I1122 00:14:41.067352  563925 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:14:41.070297  563925 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1122 00:14:41.073934  563925 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-513600/.minikube
	I1122 00:14:41.076934  563925 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1122 00:14:41.079898  563925 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:14:41.083494  563925 config.go:182] Loaded profile config "ha-561110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:14:41.083606  563925 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:14:41.111284  563925 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1122 00:14:41.111387  563925 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:14:41.175037  563925 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-22 00:14:41.165296296 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:14:41.175148  563925 docker.go:319] overlay module found
	I1122 00:14:41.178250  563925 out.go:179] * Using the docker driver based on existing profile
	I1122 00:14:41.180953  563925 start.go:309] selected driver: docker
	I1122 00:14:41.180971  563925 start.go:930] validating driver "docker" against &{Name:ha-561110 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-561110 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:14:41.181129  563925 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:14:41.181235  563925 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:14:41.238102  563925 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-22 00:14:41.228646014 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:14:41.238520  563925 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:14:41.238556  563925 cni.go:84] Creating CNI manager for ""
	I1122 00:14:41.238614  563925 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1122 00:14:41.238661  563925 start.go:353] cluster config:
	{Name:ha-561110 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-561110 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:14:41.241877  563925 out.go:179] * Starting "ha-561110" primary control-plane node in "ha-561110" cluster
	I1122 00:14:41.244623  563925 cache.go:134] Beginning downloading kic base image for docker with crio
	I1122 00:14:41.247356  563925 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:14:41.250191  563925 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:14:41.250238  563925 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1122 00:14:41.250251  563925 cache.go:65] Caching tarball of preloaded images
	I1122 00:14:41.250256  563925 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:14:41.250328  563925 preload.go:238] Found /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1122 00:14:41.250339  563925 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1122 00:14:41.250480  563925 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/config.json ...
	I1122 00:14:41.275134  563925 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:14:41.275155  563925 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:14:41.275171  563925 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:14:41.275193  563925 start.go:360] acquireMachinesLock for ha-561110: {Name:mkb487371897d491a1a254bbfa266b10650bf7bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:14:41.275256  563925 start.go:364] duration metric: took 36.265µs to acquireMachinesLock for "ha-561110"
	I1122 00:14:41.275288  563925 start.go:96] Skipping create...Using existing machine configuration
	I1122 00:14:41.275297  563925 fix.go:54] fixHost starting: 
	I1122 00:14:41.275560  563925 cli_runner.go:164] Run: docker container inspect ha-561110 --format={{.State.Status}}
	I1122 00:14:41.292644  563925 fix.go:112] recreateIfNeeded on ha-561110: state=Stopped err=<nil>
	W1122 00:14:41.292679  563925 fix.go:138] unexpected machine state, will restart: <nil>
	I1122 00:14:41.295991  563925 out.go:252] * Restarting existing docker container for "ha-561110" ...
	I1122 00:14:41.296094  563925 cli_runner.go:164] Run: docker start ha-561110
	I1122 00:14:41.567342  563925 cli_runner.go:164] Run: docker container inspect ha-561110 --format={{.State.Status}}
	I1122 00:14:41.593759  563925 kic.go:430] container "ha-561110" state is running.
	I1122 00:14:41.594265  563925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-561110
	I1122 00:14:41.625087  563925 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/config.json ...
	I1122 00:14:41.625337  563925 machine.go:94] provisionDockerMachine start ...
	I1122 00:14:41.625405  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110
	I1122 00:14:41.644350  563925 main.go:143] libmachine: Using SSH client type: native
	I1122 00:14:41.644684  563925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33535 <nil> <nil>}
	I1122 00:14:41.644692  563925 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:14:41.645633  563925 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1122 00:14:44.789929  563925 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-561110
	
	I1122 00:14:44.789988  563925 ubuntu.go:182] provisioning hostname "ha-561110"
	I1122 00:14:44.790089  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110
	I1122 00:14:44.809008  563925 main.go:143] libmachine: Using SSH client type: native
	I1122 00:14:44.809338  563925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33535 <nil> <nil>}
	I1122 00:14:44.809354  563925 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-561110 && echo "ha-561110" | sudo tee /etc/hostname
	I1122 00:14:44.959054  563925 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-561110
	
	I1122 00:14:44.959174  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110
	I1122 00:14:44.977402  563925 main.go:143] libmachine: Using SSH client type: native
	I1122 00:14:44.977725  563925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33535 <nil> <nil>}
	I1122 00:14:44.977747  563925 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-561110' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-561110/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-561110' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:14:45.148701  563925 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1122 00:14:45.148780  563925 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-513600/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-513600/.minikube}
	I1122 00:14:45.148894  563925 ubuntu.go:190] setting up certificates
	I1122 00:14:45.148911  563925 provision.go:84] configureAuth start
	I1122 00:14:45.149003  563925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-561110
	I1122 00:14:45.178821  563925 provision.go:143] copyHostCerts
	I1122 00:14:45.178872  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem
	I1122 00:14:45.178980  563925 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem, removing ...
	I1122 00:14:45.179051  563925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem
	I1122 00:14:45.179147  563925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem (1078 bytes)
	I1122 00:14:45.179368  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem
	I1122 00:14:45.179396  563925 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem, removing ...
	I1122 00:14:45.179408  563925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem
	I1122 00:14:45.179513  563925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem (1123 bytes)
	I1122 00:14:45.179582  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem
	I1122 00:14:45.179688  563925 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem, removing ...
	I1122 00:14:45.179693  563925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem
	I1122 00:14:45.179763  563925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem (1675 bytes)
	I1122 00:14:45.179869  563925 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem org=jenkins.ha-561110 san=[127.0.0.1 192.168.49.2 ha-561110 localhost minikube]
	I1122 00:14:45.360921  563925 provision.go:177] copyRemoteCerts
	I1122 00:14:45.360991  563925 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:14:45.361031  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110
	I1122 00:14:45.379675  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33535 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110/id_rsa Username:docker}
	I1122 00:14:45.481986  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1122 00:14:45.482096  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:14:45.500661  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1122 00:14:45.500750  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1122 00:14:45.519280  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1122 00:14:45.519388  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1122 00:14:45.538099  563925 provision.go:87] duration metric: took 389.17288ms to configureAuth
	I1122 00:14:45.538126  563925 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:14:45.538361  563925 config.go:182] Loaded profile config "ha-561110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:14:45.538464  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110
	I1122 00:14:45.557843  563925 main.go:143] libmachine: Using SSH client type: native
	I1122 00:14:45.558153  563925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33535 <nil> <nil>}
	I1122 00:14:45.558173  563925 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1122 00:14:45.916699  563925 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1122 00:14:45.916722  563925 machine.go:97] duration metric: took 4.291375262s to provisionDockerMachine
	I1122 00:14:45.916734  563925 start.go:293] postStartSetup for "ha-561110" (driver="docker")
	I1122 00:14:45.916744  563925 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:14:45.916808  563925 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:14:45.916864  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110
	I1122 00:14:45.937454  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33535 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110/id_rsa Username:docker}
	I1122 00:14:46.038557  563925 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:14:46.042104  563925 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:14:46.042148  563925 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:14:46.042162  563925 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/addons for local assets ...
	I1122 00:14:46.042244  563925 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/files for local assets ...
	I1122 00:14:46.042340  563925 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem -> 5169372.pem in /etc/ssl/certs
	I1122 00:14:46.042358  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem -> /etc/ssl/certs/5169372.pem
	I1122 00:14:46.042519  563925 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:14:46.050335  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /etc/ssl/certs/5169372.pem (1708 bytes)
	I1122 00:14:46.070075  563925 start.go:296] duration metric: took 153.324249ms for postStartSetup
	I1122 00:14:46.070158  563925 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:14:46.070200  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110
	I1122 00:14:46.089314  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33535 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110/id_rsa Username:docker}
	I1122 00:14:46.187250  563925 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:14:46.192065  563925 fix.go:56] duration metric: took 4.916761973s for fixHost
	I1122 00:14:46.192091  563925 start.go:83] releasing machines lock for "ha-561110", held for 4.916821031s
	I1122 00:14:46.192188  563925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-561110
	I1122 00:14:46.209139  563925 ssh_runner.go:195] Run: cat /version.json
	I1122 00:14:46.209197  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110
	I1122 00:14:46.209461  563925 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:14:46.209511  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110
	I1122 00:14:46.233161  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33535 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110/id_rsa Username:docker}
	I1122 00:14:46.237608  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33535 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110/id_rsa Username:docker}
	I1122 00:14:46.417414  563925 ssh_runner.go:195] Run: systemctl --version
	I1122 00:14:46.423708  563925 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1122 00:14:46.459853  563925 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:14:46.464430  563925 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:14:46.464499  563925 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:14:46.472070  563925 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1122 00:14:46.472092  563925 start.go:496] detecting cgroup driver to use...
	I1122 00:14:46.472140  563925 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1122 00:14:46.472192  563925 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1122 00:14:46.487805  563925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1122 00:14:46.501008  563925 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:14:46.501113  563925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:14:46.517083  563925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:14:46.530035  563925 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:14:46.634532  563925 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:14:46.753160  563925 docker.go:234] disabling docker service ...
	I1122 00:14:46.753271  563925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:14:46.768112  563925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:14:46.781109  563925 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:14:46.889282  563925 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:14:47.012744  563925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:14:47.026639  563925 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:14:47.040275  563925 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1122 00:14:47.040386  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:14:47.049142  563925 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1122 00:14:47.049222  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:14:47.057948  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:14:47.066761  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:14:47.076164  563925 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:14:47.085123  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:14:47.094801  563925 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:14:47.102952  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:14:47.111641  563925 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:14:47.119239  563925 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:14:47.126541  563925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:14:47.233256  563925 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1122 00:14:47.384501  563925 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1122 00:14:47.384567  563925 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1122 00:14:47.388356  563925 start.go:564] Will wait 60s for crictl version
	I1122 00:14:47.388468  563925 ssh_runner.go:195] Run: which crictl
	I1122 00:14:47.392030  563925 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:14:47.416283  563925 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1122 00:14:47.416422  563925 ssh_runner.go:195] Run: crio --version
	I1122 00:14:47.444890  563925 ssh_runner.go:195] Run: crio --version
	I1122 00:14:47.480934  563925 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1122 00:14:47.483635  563925 cli_runner.go:164] Run: docker network inspect ha-561110 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:14:47.499516  563925 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1122 00:14:47.503369  563925 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:14:47.513239  563925 kubeadm.go:884] updating cluster {Name:ha-561110 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-561110 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1122 00:14:47.513386  563925 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:14:47.513453  563925 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:14:47.547714  563925 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:14:47.547741  563925 crio.go:433] Images already preloaded, skipping extraction
	I1122 00:14:47.547794  563925 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:14:47.572446  563925 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:14:47.572474  563925 cache_images.go:86] Images are preloaded, skipping loading
	I1122 00:14:47.572483  563925 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1122 00:14:47.572577  563925 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-561110 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-561110 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
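The kubelet unit template above is rendered into a systemd drop-in later in this log; once provisioning finishes it can be read back to confirm the flags actually in effect. An illustrative check (profile name and drop-in path taken from this log):

    # Show the kubelet unit together with the generated 10-kubeadm.conf drop-in.
    minikube ssh -p ha-561110 "systemctl cat kubelet"
    minikube ssh -p ha-561110 "sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf"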
	I1122 00:14:47.572661  563925 ssh_runner.go:195] Run: crio config
	I1122 00:14:47.634066  563925 cni.go:84] Creating CNI manager for ""
	I1122 00:14:47.634094  563925 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1122 00:14:47.634114  563925 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1122 00:14:47.634156  563925 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-561110 NodeName:ha-561110 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 00:14:47.634316  563925 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-561110"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
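A generated config like the one above can be sanity-checked against the node's kubeadm binary before it is applied. A sketch only, assuming `kubeadm config validate` is available in this kubeadm release and using the binary and staging paths that appear later in this log:

    # Validate the staged kubeadm config without touching the running cluster.
    minikube ssh -p ha-561110 \
      "sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new"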
	
	I1122 00:14:47.634340  563925 kube-vip.go:115] generating kube-vip config ...
	I1122 00:14:47.634397  563925 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1122 00:14:47.646470  563925 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1122 00:14:47.646593  563925 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
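Once the control-plane nodes are back up, the behaviour described by this kube-vip manifest can be spot-checked from the host using the lease name and VIP set above. Illustrative commands, assuming the kubectl context matches the profile name:

    # The current kube-vip leader holds the plndr-cp-lock lease and announces 192.168.49.254 on eth0.
    kubectl --context ha-561110 -n kube-system get lease plndr-cp-lock
    minikube ssh -p ha-561110 "ip -4 addr show dev eth0 | grep 192.168.49.254"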
	I1122 00:14:47.646695  563925 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 00:14:47.654183  563925 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:14:47.654249  563925 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1122 00:14:47.661699  563925 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1122 00:14:47.674165  563925 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:14:47.686331  563925 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1122 00:14:47.698542  563925 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1122 00:14:47.711254  563925 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1122 00:14:47.714862  563925 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:14:47.724174  563925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:14:47.839371  563925 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:14:47.853685  563925 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110 for IP: 192.168.49.2
	I1122 00:14:47.853753  563925 certs.go:195] generating shared ca certs ...
	I1122 00:14:47.853787  563925 certs.go:227] acquiring lock for ca certs: {Name:mkaf4c79493334cb2058b349c36be7473837f9b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:14:47.853987  563925 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key
	I1122 00:14:47.854075  563925 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key
	I1122 00:14:47.854111  563925 certs.go:257] generating profile certs ...
	I1122 00:14:47.854232  563925 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/client.key
	I1122 00:14:47.854280  563925 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.key.17887f76
	I1122 00:14:47.854319  563925 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.crt.17887f76 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1122 00:14:47.941434  563925 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.crt.17887f76 ...
	I1122 00:14:47.941949  563925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.crt.17887f76: {Name:mk196d114e0b17147f8bed35c49f594a2533cc5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:14:47.942154  563925 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.key.17887f76 ...
	I1122 00:14:47.942191  563925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.key.17887f76: {Name:mk34aa50af1cad4bd0a7687c2b98f2a65013e746 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:14:47.942314  563925 certs.go:382] copying /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.crt.17887f76 -> /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.crt
	I1122 00:14:47.942500  563925 certs.go:386] copying /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.key.17887f76 -> /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.key
	I1122 00:14:47.942693  563925 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.key
	I1122 00:14:47.942729  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1122 00:14:47.942772  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1122 00:14:47.942814  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1122 00:14:47.942845  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1122 00:14:47.942881  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1122 00:14:47.942927  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1122 00:14:47.942960  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1122 00:14:47.942996  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1122 00:14:47.943078  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem (1338 bytes)
	W1122 00:14:47.943133  563925 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937_empty.pem, impossibly tiny 0 bytes
	I1122 00:14:47.943156  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:14:47.943215  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:14:47.943265  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:14:47.943352  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem (1675 bytes)
	I1122 00:14:47.943431  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem (1708 bytes)
	I1122 00:14:47.943512  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:14:47.943556  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem -> /usr/share/ca-certificates/516937.pem
	I1122 00:14:47.943584  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem -> /usr/share/ca-certificates/5169372.pem
	I1122 00:14:47.944164  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:14:47.970032  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1122 00:14:47.993299  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:14:48.024732  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:14:48.049916  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1122 00:14:48.074841  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1122 00:14:48.093300  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:14:48.113386  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1122 00:14:48.133760  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:14:48.153049  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem --> /usr/share/ca-certificates/516937.pem (1338 bytes)
	I1122 00:14:48.173569  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /usr/share/ca-certificates/5169372.pem (1708 bytes)
	I1122 00:14:48.198292  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 00:14:48.211957  563925 ssh_runner.go:195] Run: openssl version
	I1122 00:14:48.218515  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516937.pem && ln -fs /usr/share/ca-certificates/516937.pem /etc/ssl/certs/516937.pem"
	I1122 00:14:48.228447  563925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516937.pem
	I1122 00:14:48.232426  563925 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:55 /usr/share/ca-certificates/516937.pem
	I1122 00:14:48.232551  563925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516937.pem
	I1122 00:14:48.273469  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516937.pem /etc/ssl/certs/51391683.0"
	I1122 00:14:48.281348  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5169372.pem && ln -fs /usr/share/ca-certificates/5169372.pem /etc/ssl/certs/5169372.pem"
	I1122 00:14:48.289635  563925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5169372.pem
	I1122 00:14:48.293430  563925 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:55 /usr/share/ca-certificates/5169372.pem
	I1122 00:14:48.293550  563925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5169372.pem
	I1122 00:14:48.335324  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5169372.pem /etc/ssl/certs/3ec20f2e.0"
	I1122 00:14:48.343382  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:14:48.351346  563925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:14:48.354892  563925 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:14:48.354958  563925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:14:48.398958  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
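
The lines above show the hash-and-link step minikube performs for each trusted CA: `openssl x509 -hash -noout` computes the subject hash, and the PEM is symlinked into /etc/ssl/certs as `<hash>.0` so OpenSSL-based tools can find it. A minimal local sketch of the same step is below; it is illustrative only (not minikube's actual code), and the paths are the ones seen in the log.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkCACert mirrors the hash-and-symlink step above: ask openssl for the
    // subject hash of a PEM certificate and link it into certDir as <hash>.0,
    // the layout OpenSSL uses to look up trusted CAs.
    func linkCACert(pemPath, certDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", pemPath, err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(certDir, hash+".0")
    	_ = os.Remove(link) // replace a stale link, like `ln -fs`
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	// Paths taken from the log above; adjust for a local experiment.
    	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
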
	I1122 00:14:48.406910  563925 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:14:48.410614  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1122 00:14:48.451560  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1122 00:14:48.492804  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1122 00:14:48.540013  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1122 00:14:48.585271  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1122 00:14:48.653970  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
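
Each `openssl x509 -noout -checkend 86400` call above asks whether a control-plane certificate is still valid for at least 24 hours. The same check can be done in-process with crypto/x509; the sketch below is a hedged equivalent (the file path is just one of the certs listed above), not minikube's implementation.

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first certificate in a PEM file expires
    // within the given window - the in-process equivalent of
    // `openssl x509 -noout -checkend 86400`.
    func expiresWithin(pemPath string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(pemPath)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", pemPath)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("expires within 24h:", soon)
    }
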
	I1122 00:14:48.747548  563925 kubeadm.go:401] StartCluster: {Name:ha-561110 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-561110 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
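
The StartCluster line above dumps the full in-memory profile config that is later written back to profiles/ha-561110/config.json (see the "Saving config" lines further down). As a rough orientation, a deliberately tiny stand-in struct and the JSON persistence step might look like the sketch below; the field set is a simplified assumption, not minikube's real ClusterConfig type.

    package main

    import (
    	"encoding/json"
    	"os"
    	"path/filepath"
    )

    // profileConfig is a much-reduced stand-in for the cluster config echoed in
    // the StartCluster line above.
    type profileConfig struct {
    	Name              string `json:"Name"`
    	Driver            string `json:"Driver"`
    	ContainerRuntime  string `json:"ContainerRuntime"`
    	KubernetesVersion string `json:"KubernetesVersion"`
    	APIServerHAVIP    string `json:"APIServerHAVIP"`
    }

    func saveProfile(dir string, cfg profileConfig) error {
    	data, err := json.MarshalIndent(cfg, "", "  ")
    	if err != nil {
    		return err
    	}
    	if err := os.MkdirAll(dir, 0o755); err != nil {
    		return err
    	}
    	return os.WriteFile(filepath.Join(dir, "config.json"), data, 0o644)
    }

    func main() {
    	_ = saveProfile("/tmp/profiles/ha-561110", profileConfig{
    		Name:              "ha-561110",
    		Driver:            "docker",
    		ContainerRuntime:  "crio",
    		KubernetesVersion: "v1.34.1",
    		APIServerHAVIP:    "192.168.49.254",
    	})
    }
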
	I1122 00:14:48.747694  563925 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1122 00:14:48.747775  563925 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:14:48.836090  563925 cri.go:89] found id: "4cbb3fde391bd86e756416ec260b0b8a5501d5139da802107965d9e012c4eca5"
	I1122 00:14:48.836127  563925 cri.go:89] found id: "4360f5517fd5eb7d570a98dee1b801419d3b650d7e890d5ddecc79946fba46db"
	I1122 00:14:48.836132  563925 cri.go:89] found id: "a395e7473ffe2b7999ae75a70e19b4f153d459c8ccae48aeeb71b5b6248cc1f2"
	I1122 00:14:48.836136  563925 cri.go:89] found id: "9fdf72902e6e01af8761552bc83ad83cdf5a34600401d1ee9126ac6a25ae0e37"
	I1122 00:14:48.836140  563925 cri.go:89] found id: "1c929db60119ab54f03020d00f2063dc6672d329ea34f4504e502142bffbe644"
	I1122 00:14:48.836148  563925 cri.go:89] found id: ""
	I1122 00:14:48.836216  563925 ssh_runner.go:195] Run: sudo runc list -f json
	W1122 00:14:48.857525  563925 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:14:48Z" level=error msg="open /run/runc: no such file or directory"
	I1122 00:14:48.857613  563925 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 00:14:48.878520  563925 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1122 00:14:48.878565  563925 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1122 00:14:48.878624  563925 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1122 00:14:48.898381  563925 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1122 00:14:48.898972  563925 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-561110" does not appear in /home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1122 00:14:48.899101  563925 kubeconfig.go:62] /home/jenkins/minikube-integration/21934-513600/kubeconfig needs updating (will repair): [kubeconfig missing "ha-561110" cluster setting kubeconfig missing "ha-561110" context setting]
	I1122 00:14:48.900028  563925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/kubeconfig: {Name:mkcd094d8a765e5a81369c0fb33cb5e696b17bb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:14:48.901567  563925 kapi.go:59] client config for ha-561110: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/client.crt", KeyFile:"/home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/client.key", CAFile:"/home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
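
The kapi.go line above builds a client-go rest.Config pointed at the apiserver with the profile's client cert/key and the cluster CA. A hedged sketch of constructing an equivalent clientset follows; the host and file paths are the ones from the log, and the code assumes the standard k8s.io/client-go packages.

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	// Mirrors the rest.Config in the log: host plus mutual-TLS files from the profile.
    	cfg := &rest.Config{
    		Host: "https://192.168.49.2:8443",
    		TLSClientConfig: rest.TLSClientConfig{
    			CertFile: "/home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/client.crt",
    			KeyFile:  "/home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/client.key",
    			CAFile:   "/home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt",
    		},
    	}
    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("nodes:", len(nodes.Items))
    }
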
	I1122 00:14:48.907943  563925 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1122 00:14:48.907972  563925 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1122 00:14:48.907979  563925 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1122 00:14:48.907984  563925 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1122 00:14:48.907993  563925 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1122 00:14:48.908413  563925 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1122 00:14:48.908668  563925 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1122 00:14:48.938459  563925 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1122 00:14:48.938496  563925 kubeadm.go:602] duration metric: took 59.924061ms to restartPrimaryControlPlane
	I1122 00:14:48.938507  563925 kubeadm.go:403] duration metric: took 190.97977ms to StartCluster
	I1122 00:14:48.938533  563925 settings.go:142] acquiring lock: {Name:mk6c31eb57ec65b047b78b4e1046e03fe7cc77bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:14:48.938632  563925 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1122 00:14:48.939442  563925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/kubeconfig: {Name:mkcd094d8a765e5a81369c0fb33cb5e696b17bb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:14:48.939701  563925 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:14:48.939739  563925 start.go:242] waiting for startup goroutines ...
	I1122 00:14:48.939758  563925 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1122 00:14:48.940342  563925 config.go:182] Loaded profile config "ha-561110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:14:48.944134  563925 out.go:179] * Enabled addons: 
	I1122 00:14:48.947186  563925 addons.go:530] duration metric: took 7.425265ms for enable addons: enabled=[]
	I1122 00:14:48.947258  563925 start.go:247] waiting for cluster config update ...
	I1122 00:14:48.947278  563925 start.go:256] writing updated cluster config ...
	I1122 00:14:48.950835  563925 out.go:203] 
	I1122 00:14:48.954183  563925 config.go:182] Loaded profile config "ha-561110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:14:48.954390  563925 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/config.json ...
	I1122 00:14:48.958001  563925 out.go:179] * Starting "ha-561110-m02" control-plane node in "ha-561110" cluster
	I1122 00:14:48.961037  563925 cache.go:134] Beginning downloading kic base image for docker with crio
	I1122 00:14:48.964123  563925 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:14:48.966981  563925 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:14:48.967024  563925 cache.go:65] Caching tarball of preloaded images
	I1122 00:14:48.967169  563925 preload.go:238] Found /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1122 00:14:48.967185  563925 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1122 00:14:48.967352  563925 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/config.json ...
	I1122 00:14:48.967608  563925 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:14:49.000604  563925 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:14:49.000625  563925 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:14:49.000646  563925 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:14:49.000671  563925 start.go:360] acquireMachinesLock for ha-561110-m02: {Name:mkb358f78002efa4c17b8c7cead5ae57992aea2d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:14:49.000737  563925 start.go:364] duration metric: took 50.534µs to acquireMachinesLock for "ha-561110-m02"
	I1122 00:14:49.000757  563925 start.go:96] Skipping create...Using existing machine configuration
	I1122 00:14:49.000763  563925 fix.go:54] fixHost starting: m02
	I1122 00:14:49.001076  563925 cli_runner.go:164] Run: docker container inspect ha-561110-m02 --format={{.State.Status}}
	I1122 00:14:49.034056  563925 fix.go:112] recreateIfNeeded on ha-561110-m02: state=Stopped err=<nil>
	W1122 00:14:49.034088  563925 fix.go:138] unexpected machine state, will restart: <nil>
	I1122 00:14:49.037399  563925 out.go:252] * Restarting existing docker container for "ha-561110-m02" ...
	I1122 00:14:49.037518  563925 cli_runner.go:164] Run: docker start ha-561110-m02
	I1122 00:14:49.451675  563925 cli_runner.go:164] Run: docker container inspect ha-561110-m02 --format={{.State.Status}}
	I1122 00:14:49.475681  563925 kic.go:430] container "ha-561110-m02" state is running.
	I1122 00:14:49.476112  563925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-561110-m02
	I1122 00:14:49.506374  563925 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/config.json ...
	I1122 00:14:49.506719  563925 machine.go:94] provisionDockerMachine start ...
	I1122 00:14:49.506835  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m02
	I1122 00:14:49.550202  563925 main.go:143] libmachine: Using SSH client type: native
	I1122 00:14:49.550557  563925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33540 <nil> <nil>}
	I1122 00:14:49.550573  563925 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:14:49.551331  563925 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37062->127.0.0.1:33540: read: connection reset by peer
	I1122 00:14:52.908642  563925 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-561110-m02
	
	I1122 00:14:52.908715  563925 ubuntu.go:182] provisioning hostname "ha-561110-m02"
	I1122 00:14:52.908805  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m02
	I1122 00:14:52.953932  563925 main.go:143] libmachine: Using SSH client type: native
	I1122 00:14:52.954246  563925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33540 <nil> <nil>}
	I1122 00:14:52.954258  563925 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-561110-m02 && echo "ha-561110-m02" | sudo tee /etc/hostname
	I1122 00:14:53.345252  563925 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-561110-m02
	
	I1122 00:14:53.345401  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m02
	I1122 00:14:53.377691  563925 main.go:143] libmachine: Using SSH client type: native
	I1122 00:14:53.378150  563925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33540 <nil> <nil>}
	I1122 00:14:53.378172  563925 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-561110-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-561110-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-561110-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:14:53.591463  563925 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1122 00:14:53.591496  563925 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-513600/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-513600/.minikube}
	I1122 00:14:53.591513  563925 ubuntu.go:190] setting up certificates
	I1122 00:14:53.591526  563925 provision.go:84] configureAuth start
	I1122 00:14:53.591597  563925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-561110-m02
	I1122 00:14:53.618168  563925 provision.go:143] copyHostCerts
	I1122 00:14:53.618211  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem
	I1122 00:14:53.618242  563925 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem, removing ...
	I1122 00:14:53.618253  563925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem
	I1122 00:14:53.618333  563925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem (1078 bytes)
	I1122 00:14:53.618435  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem
	I1122 00:14:53.618458  563925 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem, removing ...
	I1122 00:14:53.618465  563925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem
	I1122 00:14:53.618494  563925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem (1123 bytes)
	I1122 00:14:53.618552  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem
	I1122 00:14:53.618576  563925 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem, removing ...
	I1122 00:14:53.618584  563925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem
	I1122 00:14:53.618612  563925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem (1675 bytes)
	I1122 00:14:53.618665  563925 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem org=jenkins.ha-561110-m02 san=[127.0.0.1 192.168.49.3 ha-561110-m02 localhost minikube]
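
The provision step above generates a machine server certificate whose SANs cover 127.0.0.1, 192.168.49.3, the node hostname, localhost, and minikube. For orientation only, a compact crypto/x509 sketch of issuing a SAN certificate from a CA is shown below; it is a generic illustration under the SAN list from the log, not minikube's provisioning code, and error handling is elided.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// CA key pair (in minikube this is the persisted ca.pem/ca-key.pem).
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server cert with the SANs listed in the log line above.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-561110-m02"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"ha-561110-m02", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.3")},
    	}
    	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
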
	I1122 00:14:53.787782  563925 provision.go:177] copyRemoteCerts
	I1122 00:14:53.787855  563925 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:14:53.787902  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m02
	I1122 00:14:53.805764  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110-m02/id_rsa Username:docker}
	I1122 00:14:53.914816  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1122 00:14:53.914879  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:14:53.944075  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1122 00:14:53.944134  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1122 00:14:53.978384  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1122 00:14:53.978443  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1122 00:14:54.007139  563925 provision.go:87] duration metric: took 415.59481ms to configureAuth
	I1122 00:14:54.007174  563925 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:14:54.007455  563925 config.go:182] Loaded profile config "ha-561110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:14:54.007583  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m02
	I1122 00:14:54.047939  563925 main.go:143] libmachine: Using SSH client type: native
	I1122 00:14:54.048267  563925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33540 <nil> <nil>}
	I1122 00:14:54.048291  563925 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1122 00:14:54.482099  563925 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1122 00:14:54.482120  563925 machine.go:97] duration metric: took 4.975378731s to provisionDockerMachine
	I1122 00:14:54.482133  563925 start.go:293] postStartSetup for "ha-561110-m02" (driver="docker")
	I1122 00:14:54.482144  563925 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:14:54.482209  563925 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:14:54.482252  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m02
	I1122 00:14:54.500164  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110-m02/id_rsa Username:docker}
	I1122 00:14:54.602698  563925 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:14:54.606253  563925 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:14:54.606285  563925 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:14:54.606296  563925 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/addons for local assets ...
	I1122 00:14:54.606352  563925 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/files for local assets ...
	I1122 00:14:54.606439  563925 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem -> 5169372.pem in /etc/ssl/certs
	I1122 00:14:54.606450  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem -> /etc/ssl/certs/5169372.pem
	I1122 00:14:54.606572  563925 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:14:54.614732  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /etc/ssl/certs/5169372.pem (1708 bytes)
	I1122 00:14:54.633198  563925 start.go:296] duration metric: took 151.050123ms for postStartSetup
	I1122 00:14:54.633327  563925 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:14:54.633378  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m02
	I1122 00:14:54.651888  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110-m02/id_rsa Username:docker}
	I1122 00:14:54.751498  563925 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:14:54.757858  563925 fix.go:56] duration metric: took 5.757088169s for fixHost
	I1122 00:14:54.757886  563925 start.go:83] releasing machines lock for "ha-561110-m02", held for 5.757140204s
	I1122 00:14:54.757958  563925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-561110-m02
	I1122 00:14:54.778371  563925 out.go:179] * Found network options:
	I1122 00:14:54.781341  563925 out.go:179]   - NO_PROXY=192.168.49.2
	W1122 00:14:54.784285  563925 proxy.go:120] fail to check proxy env: Error ip not in block
	W1122 00:14:54.784332  563925 proxy.go:120] fail to check proxy env: Error ip not in block
	I1122 00:14:54.784409  563925 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1122 00:14:54.784457  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m02
	I1122 00:14:54.784734  563925 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:14:54.784793  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m02
	I1122 00:14:54.806895  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110-m02/id_rsa Username:docker}
	I1122 00:14:54.810601  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110-m02/id_rsa Username:docker}
	I1122 00:14:54.952580  563925 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:14:55.010644  563925 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:14:55.010736  563925 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:14:55.020151  563925 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1122 00:14:55.020182  563925 start.go:496] detecting cgroup driver to use...
	I1122 00:14:55.020226  563925 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1122 00:14:55.020299  563925 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1122 00:14:55.036774  563925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1122 00:14:55.050901  563925 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:14:55.051008  563925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:14:55.067844  563925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:14:55.088601  563925 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:14:55.315735  563925 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:14:55.558850  563925 docker.go:234] disabling docker service ...
	I1122 00:14:55.558960  563925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:14:55.576438  563925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:14:55.595046  563925 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:14:55.815234  563925 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:14:56.006098  563925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:14:56.021481  563925 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:14:56.044364  563925 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1122 00:14:56.044478  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:14:56.068864  563925 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1122 00:14:56.068980  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:14:56.084397  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:14:56.114539  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:14:56.145163  563925 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:14:56.167039  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:14:56.186342  563925 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:14:56.205126  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:14:56.216422  563925 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:14:56.246320  563925 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:14:56.266882  563925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:14:56.589643  563925 ssh_runner.go:195] Run: sudo systemctl restart crio
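
The sed commands above rewrite the cri-o drop-in so that pause_image points at registry.k8s.io/pause:3.10.1 and cgroup_manager is forced to cgroupfs before crio is restarted. The Go sketch below applies the same two substitutions with regexp; it is a simplified illustration of the edit, with the file path and values taken from the log.

    package main

    import (
    	"os"
    	"regexp"
    )

    // rewriteCrioConf mimics the sed edits above: point cri-o at the desired
    // pause image and force the chosen cgroup manager in the drop-in config.
    func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
    	conf, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAll(conf, []byte(`pause_image = "`+pauseImage+`"`))
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAll(conf, []byte(`cgroup_manager = "`+cgroupManager+`"`))
    	return os.WriteFile(path, conf, 0o644)
    }

    func main() {
    	// Same targets as the log: the 02-crio.conf drop-in, pause 3.10.1, cgroupfs.
    	_ = rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
    		"registry.k8s.io/pause:3.10.1", "cgroupfs")
    }
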
	I1122 00:14:56.984258  563925 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1122 00:14:56.984384  563925 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1122 00:14:56.988684  563925 start.go:564] Will wait 60s for crictl version
	I1122 00:14:56.988823  563925 ssh_runner.go:195] Run: which crictl
	I1122 00:14:56.993930  563925 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:14:57.036836  563925 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1122 00:14:57.036996  563925 ssh_runner.go:195] Run: crio --version
	I1122 00:14:57.084070  563925 ssh_runner.go:195] Run: crio --version
	I1122 00:14:57.125443  563925 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1122 00:14:57.128539  563925 out.go:179]   - env NO_PROXY=192.168.49.2
	I1122 00:14:57.131626  563925 cli_runner.go:164] Run: docker network inspect ha-561110 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:14:57.158795  563925 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1122 00:14:57.173001  563925 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:14:57.195629  563925 mustload.go:66] Loading cluster: ha-561110
	I1122 00:14:57.195865  563925 config.go:182] Loaded profile config "ha-561110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:14:57.196127  563925 cli_runner.go:164] Run: docker container inspect ha-561110 --format={{.State.Status}}
	I1122 00:14:57.223215  563925 host.go:66] Checking if "ha-561110" exists ...
	I1122 00:14:57.223486  563925 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110 for IP: 192.168.49.3
	I1122 00:14:57.223499  563925 certs.go:195] generating shared ca certs ...
	I1122 00:14:57.223514  563925 certs.go:227] acquiring lock for ca certs: {Name:mkaf4c79493334cb2058b349c36be7473837f9b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:14:57.223627  563925 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key
	I1122 00:14:57.223673  563925 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key
	I1122 00:14:57.223683  563925 certs.go:257] generating profile certs ...
	I1122 00:14:57.223760  563925 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/client.key
	I1122 00:14:57.223818  563925 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.key.1995a48d
	I1122 00:14:57.223886  563925 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.key
	I1122 00:14:57.223904  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1122 00:14:57.223916  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1122 00:14:57.223932  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1122 00:14:57.223943  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1122 00:14:57.223958  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1122 00:14:57.223970  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1122 00:14:57.223985  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1122 00:14:57.223995  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1122 00:14:57.224044  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem (1338 bytes)
	W1122 00:14:57.224081  563925 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937_empty.pem, impossibly tiny 0 bytes
	I1122 00:14:57.224093  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:14:57.224122  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:14:57.224153  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:14:57.224179  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem (1675 bytes)
	I1122 00:14:57.224229  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem (1708 bytes)
	I1122 00:14:57.224300  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem -> /usr/share/ca-certificates/5169372.pem
	I1122 00:14:57.224317  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:14:57.224334  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem -> /usr/share/ca-certificates/516937.pem
	I1122 00:14:57.224393  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110
	I1122 00:14:57.252760  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33535 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110/id_rsa Username:docker}
	I1122 00:14:57.354098  563925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1122 00:14:57.358457  563925 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1122 00:14:57.367394  563925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1122 00:14:57.371898  563925 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1122 00:14:57.380426  563925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1122 00:14:57.384846  563925 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1122 00:14:57.393409  563925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1122 00:14:57.397317  563925 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1122 00:14:57.405462  563925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1122 00:14:57.409765  563925 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1122 00:14:57.418123  563925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1122 00:14:57.422240  563925 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1122 00:14:57.430625  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:14:57.448740  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1122 00:14:57.466976  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:14:57.489136  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:14:57.510655  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1122 00:14:57.531352  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1122 00:14:57.551538  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:14:57.572743  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1122 00:14:57.593047  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /usr/share/ca-certificates/5169372.pem (1708 bytes)
	I1122 00:14:57.616537  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:14:57.636347  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem --> /usr/share/ca-certificates/516937.pem (1338 bytes)
	I1122 00:14:57.655714  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1122 00:14:57.671132  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1122 00:14:57.686013  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1122 00:14:57.702655  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1122 00:14:57.717580  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1122 00:14:57.733104  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1122 00:14:57.748086  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1122 00:14:57.762829  563925 ssh_runner.go:195] Run: openssl version
	I1122 00:14:57.770255  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5169372.pem && ln -fs /usr/share/ca-certificates/5169372.pem /etc/ssl/certs/5169372.pem"
	I1122 00:14:57.779598  563925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5169372.pem
	I1122 00:14:57.784055  563925 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:55 /usr/share/ca-certificates/5169372.pem
	I1122 00:14:57.784140  563925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5169372.pem
	I1122 00:14:57.827123  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5169372.pem /etc/ssl/certs/3ec20f2e.0"
	I1122 00:14:57.836065  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:14:57.845341  563925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:14:57.849594  563925 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:14:57.849679  563925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:14:57.893282  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1122 00:14:57.903127  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516937.pem && ln -fs /usr/share/ca-certificates/516937.pem /etc/ssl/certs/516937.pem"
	I1122 00:14:57.912201  563925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516937.pem
	I1122 00:14:57.916336  563925 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:55 /usr/share/ca-certificates/516937.pem
	I1122 00:14:57.916418  563925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516937.pem
	I1122 00:14:57.959761  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516937.pem /etc/ssl/certs/51391683.0"
	I1122 00:14:57.969369  563925 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:14:57.974254  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1122 00:14:58.017064  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1122 00:14:58.070486  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1122 00:14:58.116182  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1122 00:14:58.158146  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1122 00:14:58.220397  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1122 00:14:58.263034  563925 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1122 00:14:58.263156  563925 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-561110-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-561110 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1122 00:14:58.263186  563925 kube-vip.go:115] generating kube-vip config ...
	I1122 00:14:58.263244  563925 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1122 00:14:58.282844  563925 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1122 00:14:58.282918  563925 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
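
Before emitting the kube-vip manifest above, minikube probes for the ip_vs kernel module (`lsmod | grep ip_vs`) and, when it is absent, gives up on IPVS-based control-plane load balancing and relies on the ARP-advertised VIP instead. A small sketch of the same probe, reading /proc/modules directly, is below; it is purely illustrative.

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    // hasIPVS reports whether the ip_vs module shows up in /proc/modules,
    // the same information `lsmod | grep ip_vs` looks for in the log above.
    func hasIPVS() (bool, error) {
    	f, err := os.Open("/proc/modules")
    	if err != nil {
    		return false, err
    	}
    	defer f.Close()
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		if strings.HasPrefix(sc.Text(), "ip_vs") {
    			return true, nil
    		}
    	}
    	return false, sc.Err()
    }

    func main() {
    	ok, err := hasIPVS()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("ip_vs available:", ok)
    }
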
	I1122 00:14:58.282999  563925 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 00:14:58.293245  563925 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:14:58.293334  563925 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1122 00:14:58.306481  563925 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1122 00:14:58.327177  563925 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:14:58.341755  563925 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1122 00:14:58.358483  563925 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1122 00:14:58.362397  563925 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:14:58.372758  563925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:14:58.574763  563925 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:14:58.589366  563925 config.go:182] Loaded profile config "ha-561110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:14:58.589071  563925 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:14:58.595464  563925 out.go:179] * Verifying Kubernetes components...
	I1122 00:14:58.597975  563925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:14:58.780512  563925 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:14:58.804624  563925 kapi.go:59] client config for ha-561110: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/client.crt", KeyFile:"/home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/client.key", CAFile:"/home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1122 00:14:58.804704  563925 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1122 00:14:58.804940  563925 node_ready.go:35] waiting up to 6m0s for node "ha-561110-m02" to be "Ready" ...
	I1122 00:15:18.370415  563925 node_ready.go:49] node "ha-561110-m02" is "Ready"
	I1122 00:15:18.370443  563925 node_ready.go:38] duration metric: took 19.565489572s for node "ha-561110-m02" to be "Ready" ...
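
The node_ready.go lines above wait up to 6 minutes for ha-561110-m02 to report a Ready condition. A hedged client-go sketch of such a poll is below; the kubeconfig path is the one from the log, and the loop is a generic illustration rather than minikube's own code.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls until the named node reports Ready=True, roughly what
    // the node_ready.go lines above are doing for ha-561110-m02.
    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // transient apiserver errors: keep polling
    			}
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					return true, nil
    				}
    			}
    			return false, nil
    		})
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21934-513600/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(waitNodeReady(cs, "ha-561110-m02", 6*time.Minute))
    }
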
	I1122 00:15:18.370457  563925 api_server.go:52] waiting for apiserver process to appear ...
	I1122 00:15:18.370519  563925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:15:18.871467  563925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:15:19.371300  563925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:15:19.387145  563925 api_server.go:72] duration metric: took 20.797721396s to wait for apiserver process to appear ...
	I1122 00:15:19.387224  563925 api_server.go:88] waiting for apiserver healthz status ...
	I1122 00:15:19.387265  563925 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1122 00:15:19.396105  563925 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1122 00:15:19.396183  563925 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1122 00:15:19.887636  563925 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1122 00:15:19.899172  563925 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1122 00:15:19.899202  563925 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1122 00:15:20.387390  563925 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1122 00:15:20.399975  563925 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1122 00:15:20.401338  563925 api_server.go:141] control plane version: v1.34.1
	I1122 00:15:20.401367  563925 api_server.go:131] duration metric: took 1.014115281s to wait for apiserver health ...
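	The 500 responses above are expected during this window: individual post-start hooks (rbac/bootstrap-roles, start-kubernetes-service-cidr-controller) report failed until they finish, after which /healthz flips to 200. The retry pattern — GET /healthz roughly every 500ms until it returns 200 — can be reproduced with a short standalone poller. The sketch below is not minikube's actual api_server.go code; the URL, 1-minute timeout, and InsecureSkipVerify shortcut are assumptions for illustration (real code should trust the cluster CA instead).

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Skipping TLS verification is a shortcut for this sketch only.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver reports healthy
				}
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence seen in the log
		}
		return fmt.Errorf("timed out waiting for %s to report healthy", url)
	}

	func main() {
		if err := waitForHealthz("https://192.168.49.2:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}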
	I1122 00:15:20.401377  563925 system_pods.go:43] waiting for kube-system pods to appear ...
	I1122 00:15:20.428331  563925 system_pods.go:59] 25 kube-system pods found
	I1122 00:15:20.428372  563925 system_pods.go:61] "coredns-66bc5c9577-rrkkw" [97c7e1c9-e499-4131-957e-6da8bd29c994] Running
	I1122 00:15:20.428379  563925 system_pods.go:61] "coredns-66bc5c9577-vp8f5" [6d945620-203b-4e4e-b9e2-ef07e6b0f89b] Running
	I1122 00:15:20.428413  563925 system_pods.go:61] "etcd-ha-561110" [5a87193f-0871-4a4c-a409-4d52da31b88b] Running
	I1122 00:15:20.428428  563925 system_pods.go:61] "etcd-ha-561110-m02" [2c4dde3d-3a4c-4d47-b52c-980920facb09] Running
	I1122 00:15:20.428433  563925 system_pods.go:61] "etcd-ha-561110-m03" [d9d64b02-a6c9-48d1-9633-71cfae997fa8] Running
	I1122 00:15:20.428436  563925 system_pods.go:61] "kindnet-4tkd6" [63b063bf-1876-47e2-acb2-a5561b22b975] Running
	I1122 00:15:20.428440  563925 system_pods.go:61] "kindnet-7g65m" [edeca4a6-de24-4444-be9c-cdcbf744f52a] Running
	I1122 00:15:20.428448  563925 system_pods.go:61] "kindnet-dltvw" [ec75f262-ca6c-4766-bc81-60a4e51e94f0] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1122 00:15:20.428457  563925 system_pods.go:61] "kindnet-w4kh7" [61649d36-e515-4c70-831e-2a509e3b67f3] Running
	I1122 00:15:20.428464  563925 system_pods.go:61] "kube-apiserver-ha-561110" [e94b2c4e-8cc8-45e3-9b89-d1805b254c99] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1122 00:15:20.428469  563925 system_pods.go:61] "kube-apiserver-ha-561110-m02" [98ee0c6b-6094-4264-98e8-69d3f1bd0c04] Running
	I1122 00:15:20.428491  563925 system_pods.go:61] "kube-apiserver-ha-561110-m03" [5b0131a7-0af0-48ff-8889-e82b8a2a2e43] Running
	I1122 00:15:20.428503  563925 system_pods.go:61] "kube-controller-manager-ha-561110" [db7b105b-9fa2-43a8-a08d-837b9960db31] Running
	I1122 00:15:20.428508  563925 system_pods.go:61] "kube-controller-manager-ha-561110-m02" [2bb17b90-45c6-4c74-96a1-81f05c51a0cf] Running
	I1122 00:15:20.428511  563925 system_pods.go:61] "kube-controller-manager-ha-561110-m03" [a1fefba1-3967-4b58-b8e7-2bec4a7b896b] Running
	I1122 00:15:20.428516  563925 system_pods.go:61] "kube-proxy-2vctt" [f89e3d32-bca1-4b9a-8531-7eab74e6e0da] Running
	I1122 00:15:20.428527  563925 system_pods.go:61] "kube-proxy-b8wb5" [ac8e8b19-cd59-454e-ab83-b9d08cf4cea0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1122 00:15:20.428533  563925 system_pods.go:61] "kube-proxy-fh5cv" [318c6763-fea1-4564-86f6-18cfad691213] Running
	I1122 00:15:20.428542  563925 system_pods.go:61] "kube-proxy-v5ndg" [5e85dc4a-71dd-40c6-86f6-5c79b7f45194] Running
	I1122 00:15:20.428546  563925 system_pods.go:61] "kube-scheduler-ha-561110" [3267ceff-350f-471c-8e2b-9be8b8bdc471] Running
	I1122 00:15:20.428567  563925 system_pods.go:61] "kube-scheduler-ha-561110-m02" [75edb16c-cd99-46b4-bd49-e0646746877f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1122 00:15:20.428578  563925 system_pods.go:61] "kube-scheduler-ha-561110-m03" [6763f28e-1726-4a48-bac3-1a7e5f82595e] Running
	I1122 00:15:20.428582  563925 system_pods.go:61] "kube-vip-ha-561110-m02" [e4be1217-de52-4c2a-8cfb-a411559af009] Running
	I1122 00:15:20.428596  563925 system_pods.go:61] "kube-vip-ha-561110-m03" [5e7072f7-2a3d-4add-bc1d-e69a03dd28cb] Running
	I1122 00:15:20.428608  563925 system_pods.go:61] "storage-provisioner" [6bf95a26-263b-4088-904d-b344d4826342] Running
	I1122 00:15:20.428614  563925 system_pods.go:74] duration metric: took 27.23022ms to wait for pod list to return data ...
	I1122 00:15:20.428622  563925 default_sa.go:34] waiting for default service account to be created ...
	I1122 00:15:20.444498  563925 default_sa.go:45] found service account: "default"
	I1122 00:15:20.444536  563925 default_sa.go:55] duration metric: took 15.88117ms for default service account to be created ...
	I1122 00:15:20.444583  563925 system_pods.go:116] waiting for k8s-apps to be running ...
	I1122 00:15:20.468591  563925 system_pods.go:86] 25 kube-system pods found
	I1122 00:15:20.468633  563925 system_pods.go:89] "coredns-66bc5c9577-rrkkw" [97c7e1c9-e499-4131-957e-6da8bd29c994] Running
	I1122 00:15:20.468662  563925 system_pods.go:89] "coredns-66bc5c9577-vp8f5" [6d945620-203b-4e4e-b9e2-ef07e6b0f89b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:15:20.468674  563925 system_pods.go:89] "etcd-ha-561110" [5a87193f-0871-4a4c-a409-4d52da31b88b] Running
	I1122 00:15:20.468681  563925 system_pods.go:89] "etcd-ha-561110-m02" [2c4dde3d-3a4c-4d47-b52c-980920facb09] Running
	I1122 00:15:20.468703  563925 system_pods.go:89] "etcd-ha-561110-m03" [d9d64b02-a6c9-48d1-9633-71cfae997fa8] Running
	I1122 00:15:20.468713  563925 system_pods.go:89] "kindnet-4tkd6" [63b063bf-1876-47e2-acb2-a5561b22b975] Running
	I1122 00:15:20.468719  563925 system_pods.go:89] "kindnet-7g65m" [edeca4a6-de24-4444-be9c-cdcbf744f52a] Running
	I1122 00:15:20.468727  563925 system_pods.go:89] "kindnet-dltvw" [ec75f262-ca6c-4766-bc81-60a4e51e94f0] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1122 00:15:20.468736  563925 system_pods.go:89] "kindnet-w4kh7" [61649d36-e515-4c70-831e-2a509e3b67f3] Running
	I1122 00:15:20.468743  563925 system_pods.go:89] "kube-apiserver-ha-561110" [e94b2c4e-8cc8-45e3-9b89-d1805b254c99] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1122 00:15:20.468753  563925 system_pods.go:89] "kube-apiserver-ha-561110-m02" [98ee0c6b-6094-4264-98e8-69d3f1bd0c04] Running
	I1122 00:15:20.468758  563925 system_pods.go:89] "kube-apiserver-ha-561110-m03" [5b0131a7-0af0-48ff-8889-e82b8a2a2e43] Running
	I1122 00:15:20.468762  563925 system_pods.go:89] "kube-controller-manager-ha-561110" [db7b105b-9fa2-43a8-a08d-837b9960db31] Running
	I1122 00:15:20.468785  563925 system_pods.go:89] "kube-controller-manager-ha-561110-m02" [2bb17b90-45c6-4c74-96a1-81f05c51a0cf] Running
	I1122 00:15:20.468796  563925 system_pods.go:89] "kube-controller-manager-ha-561110-m03" [a1fefba1-3967-4b58-b8e7-2bec4a7b896b] Running
	I1122 00:15:20.468800  563925 system_pods.go:89] "kube-proxy-2vctt" [f89e3d32-bca1-4b9a-8531-7eab74e6e0da] Running
	I1122 00:15:20.468809  563925 system_pods.go:89] "kube-proxy-b8wb5" [ac8e8b19-cd59-454e-ab83-b9d08cf4cea0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1122 00:15:20.468818  563925 system_pods.go:89] "kube-proxy-fh5cv" [318c6763-fea1-4564-86f6-18cfad691213] Running
	I1122 00:15:20.468823  563925 system_pods.go:89] "kube-proxy-v5ndg" [5e85dc4a-71dd-40c6-86f6-5c79b7f45194] Running
	I1122 00:15:20.468827  563925 system_pods.go:89] "kube-scheduler-ha-561110" [3267ceff-350f-471c-8e2b-9be8b8bdc471] Running
	I1122 00:15:20.468833  563925 system_pods.go:89] "kube-scheduler-ha-561110-m02" [75edb16c-cd99-46b4-bd49-e0646746877f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1122 00:15:20.468841  563925 system_pods.go:89] "kube-scheduler-ha-561110-m03" [6763f28e-1726-4a48-bac3-1a7e5f82595e] Running
	I1122 00:15:20.468869  563925 system_pods.go:89] "kube-vip-ha-561110-m02" [e4be1217-de52-4c2a-8cfb-a411559af009] Running
	I1122 00:15:20.468881  563925 system_pods.go:89] "kube-vip-ha-561110-m03" [5e7072f7-2a3d-4add-bc1d-e69a03dd28cb] Running
	I1122 00:15:20.468887  563925 system_pods.go:89] "storage-provisioner" [6bf95a26-263b-4088-904d-b344d4826342] Running
	I1122 00:15:20.468911  563925 system_pods.go:126] duration metric: took 24.319558ms to wait for k8s-apps to be running ...
	I1122 00:15:20.468936  563925 system_svc.go:44] waiting for kubelet service to be running ....
	I1122 00:15:20.469011  563925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:15:20.486178  563925 system_svc.go:56] duration metric: took 17.232261ms WaitForService to wait for kubelet
	I1122 00:15:20.486213  563925 kubeadm.go:587] duration metric: took 21.896794227s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:15:20.486246  563925 node_conditions.go:102] verifying NodePressure condition ...
	I1122 00:15:20.505594  563925 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1122 00:15:20.505637  563925 node_conditions.go:123] node cpu capacity is 2
	I1122 00:15:20.505651  563925 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1122 00:15:20.505673  563925 node_conditions.go:123] node cpu capacity is 2
	I1122 00:15:20.505684  563925 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1122 00:15:20.505689  563925 node_conditions.go:123] node cpu capacity is 2
	I1122 00:15:20.505693  563925 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1122 00:15:20.505697  563925 node_conditions.go:123] node cpu capacity is 2
	I1122 00:15:20.505716  563925 node_conditions.go:105] duration metric: took 19.443078ms to run NodePressure ...
	I1122 00:15:20.505736  563925 start.go:242] waiting for startup goroutines ...
	I1122 00:15:20.505776  563925 start.go:256] writing updated cluster config ...
	I1122 00:15:20.509517  563925 out.go:203] 
	I1122 00:15:20.512839  563925 config.go:182] Loaded profile config "ha-561110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:15:20.513009  563925 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/config.json ...
	I1122 00:15:20.516821  563925 out.go:179] * Starting "ha-561110-m03" control-plane node in "ha-561110" cluster
	I1122 00:15:20.520742  563925 cache.go:134] Beginning downloading kic base image for docker with crio
	I1122 00:15:20.524203  563925 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:15:20.527654  563925 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:15:20.527732  563925 cache.go:65] Caching tarball of preloaded images
	I1122 00:15:20.527695  563925 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:15:20.528031  563925 preload.go:238] Found /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1122 00:15:20.528049  563925 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1122 00:15:20.528201  563925 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/config.json ...
	I1122 00:15:20.552866  563925 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:15:20.552887  563925 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:15:20.552899  563925 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:15:20.552922  563925 start.go:360] acquireMachinesLock for ha-561110-m03: {Name:mk8a19cfae84d78ad843d3f8169a3190cadb2d92 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:15:20.552971  563925 start.go:364] duration metric: took 34.805µs to acquireMachinesLock for "ha-561110-m03"
	I1122 00:15:20.552989  563925 start.go:96] Skipping create...Using existing machine configuration
	I1122 00:15:20.552994  563925 fix.go:54] fixHost starting: m03
	I1122 00:15:20.553255  563925 cli_runner.go:164] Run: docker container inspect ha-561110-m03 --format={{.State.Status}}
	I1122 00:15:20.581965  563925 fix.go:112] recreateIfNeeded on ha-561110-m03: state=Stopped err=<nil>
	W1122 00:15:20.581999  563925 fix.go:138] unexpected machine state, will restart: <nil>
	I1122 00:15:20.586013  563925 out.go:252] * Restarting existing docker container for "ha-561110-m03" ...
	I1122 00:15:20.586099  563925 cli_runner.go:164] Run: docker start ha-561110-m03
	I1122 00:15:20.954348  563925 cli_runner.go:164] Run: docker container inspect ha-561110-m03 --format={{.State.Status}}
	I1122 00:15:20.979345  563925 kic.go:430] container "ha-561110-m03" state is running.
	I1122 00:15:20.979708  563925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-561110-m03
	I1122 00:15:21.002371  563925 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/config.json ...
	I1122 00:15:21.002682  563925 machine.go:94] provisionDockerMachine start ...
	I1122 00:15:21.002758  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m03
	I1122 00:15:21.032872  563925 main.go:143] libmachine: Using SSH client type: native
	I1122 00:15:21.033195  563925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33545 <nil> <nil>}
	I1122 00:15:21.033211  563925 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:15:21.033881  563925 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1122 00:15:24.293634  563925 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-561110-m03
	
	I1122 00:15:24.293664  563925 ubuntu.go:182] provisioning hostname "ha-561110-m03"
	I1122 00:15:24.293763  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m03
	I1122 00:15:24.324599  563925 main.go:143] libmachine: Using SSH client type: native
	I1122 00:15:24.324926  563925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33545 <nil> <nil>}
	I1122 00:15:24.324939  563925 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-561110-m03 && echo "ha-561110-m03" | sudo tee /etc/hostname
	I1122 00:15:24.595129  563925 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-561110-m03
	
	I1122 00:15:24.595249  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m03
	I1122 00:15:24.620733  563925 main.go:143] libmachine: Using SSH client type: native
	I1122 00:15:24.621049  563925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33545 <nil> <nil>}
	I1122 00:15:24.621676  563925 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-561110-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-561110-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-561110-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:15:24.856356  563925 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1122 00:15:24.856384  563925 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-513600/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-513600/.minikube}
	I1122 00:15:24.856400  563925 ubuntu.go:190] setting up certificates
	I1122 00:15:24.856434  563925 provision.go:84] configureAuth start
	I1122 00:15:24.856521  563925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-561110-m03
	I1122 00:15:24.885855  563925 provision.go:143] copyHostCerts
	I1122 00:15:24.885898  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem
	I1122 00:15:24.885930  563925 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem, removing ...
	I1122 00:15:24.885941  563925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem
	I1122 00:15:24.886031  563925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem (1078 bytes)
	I1122 00:15:24.886116  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem
	I1122 00:15:24.886139  563925 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem, removing ...
	I1122 00:15:24.886147  563925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem
	I1122 00:15:24.886175  563925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem (1123 bytes)
	I1122 00:15:24.886221  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem
	I1122 00:15:24.886242  563925 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem, removing ...
	I1122 00:15:24.886246  563925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem
	I1122 00:15:24.886271  563925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem (1675 bytes)
	I1122 00:15:24.886322  563925 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem org=jenkins.ha-561110-m03 san=[127.0.0.1 192.168.49.4 ha-561110-m03 localhost minikube]
	I1122 00:15:25.343405  563925 provision.go:177] copyRemoteCerts
	I1122 00:15:25.343499  563925 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:15:25.343569  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m03
	I1122 00:15:25.363935  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33545 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110-m03/id_rsa Username:docker}
	I1122 00:15:25.550286  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1122 00:15:25.550350  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1122 00:15:25.575299  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1122 00:15:25.575374  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:15:25.598237  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1122 00:15:25.598338  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1122 00:15:25.628049  563925 provision.go:87] duration metric: took 771.594834ms to configureAuth
	I1122 00:15:25.628077  563925 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:15:25.628358  563925 config.go:182] Loaded profile config "ha-561110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:15:25.628508  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m03
	I1122 00:15:25.662079  563925 main.go:143] libmachine: Using SSH client type: native
	I1122 00:15:25.662398  563925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33545 <nil> <nil>}
	I1122 00:15:25.662419  563925 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1122 00:15:26.350066  563925 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1122 00:15:26.350092  563925 machine.go:97] duration metric: took 5.34739065s to provisionDockerMachine
	I1122 00:15:26.350164  563925 start.go:293] postStartSetup for "ha-561110-m03" (driver="docker")
	I1122 00:15:26.350184  563925 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:15:26.350274  563925 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:15:26.350334  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m03
	I1122 00:15:26.375980  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33545 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110-m03/id_rsa Username:docker}
	I1122 00:15:26.492303  563925 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:15:26.496241  563925 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:15:26.496272  563925 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:15:26.496284  563925 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/addons for local assets ...
	I1122 00:15:26.496339  563925 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/files for local assets ...
	I1122 00:15:26.496422  563925 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem -> 5169372.pem in /etc/ssl/certs
	I1122 00:15:26.496433  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem -> /etc/ssl/certs/5169372.pem
	I1122 00:15:26.496535  563925 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:15:26.505321  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /etc/ssl/certs/5169372.pem (1708 bytes)
	I1122 00:15:26.526339  563925 start.go:296] duration metric: took 176.150409ms for postStartSetup
	I1122 00:15:26.526443  563925 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:15:26.526504  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m03
	I1122 00:15:26.550085  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33545 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110-m03/id_rsa Username:docker}
	I1122 00:15:26.663353  563925 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:15:26.670831  563925 fix.go:56] duration metric: took 6.117814975s for fixHost
	I1122 00:15:26.670857  563925 start.go:83] releasing machines lock for "ha-561110-m03", held for 6.117877799s
	I1122 00:15:26.670925  563925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-561110-m03
	I1122 00:15:26.706528  563925 out.go:179] * Found network options:
	I1122 00:15:26.709469  563925 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1122 00:15:26.712333  563925 proxy.go:120] fail to check proxy env: Error ip not in block
	W1122 00:15:26.712371  563925 proxy.go:120] fail to check proxy env: Error ip not in block
	W1122 00:15:26.712395  563925 proxy.go:120] fail to check proxy env: Error ip not in block
	W1122 00:15:26.712406  563925 proxy.go:120] fail to check proxy env: Error ip not in block
	I1122 00:15:26.712494  563925 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1122 00:15:26.712541  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m03
	I1122 00:15:26.712807  563925 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:15:26.712873  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m03
	I1122 00:15:26.749585  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33545 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110-m03/id_rsa Username:docker}
	I1122 00:15:26.751996  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33545 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110-m03/id_rsa Username:docker}
	I1122 00:15:27.082598  563925 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:15:27.101543  563925 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:15:27.101616  563925 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:15:27.126235  563925 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1122 00:15:27.126257  563925 start.go:496] detecting cgroup driver to use...
	I1122 00:15:27.126287  563925 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1122 00:15:27.126334  563925 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1122 00:15:27.165923  563925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1122 00:15:27.239673  563925 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:15:27.239811  563925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:15:27.293000  563925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:15:27.338853  563925 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:15:27.741533  563925 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:15:28.092677  563925 docker.go:234] disabling docker service ...
	I1122 00:15:28.092771  563925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:15:28.168796  563925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:15:28.226242  563925 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:15:28.659941  563925 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:15:29.058606  563925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:15:29.101920  563925 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:15:29.136744  563925 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1122 00:15:29.136856  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:15:29.162030  563925 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1122 00:15:29.162149  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:15:29.183947  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:15:29.221891  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:15:29.244672  563925 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:15:29.275560  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:15:29.306222  563925 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:15:29.332094  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:15:29.350775  563925 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:15:29.370006  563925 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:15:29.391362  563925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:15:29.706214  563925 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1122 00:17:00.097219  563925 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.390962529s)
	I1122 00:17:00.097249  563925 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1122 00:17:00.097319  563925 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1122 00:17:00.113544  563925 start.go:564] Will wait 60s for crictl version
	I1122 00:17:00.113649  563925 ssh_runner.go:195] Run: which crictl
	I1122 00:17:00.136784  563925 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:17:00.321902  563925 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1122 00:17:00.322038  563925 ssh_runner.go:195] Run: crio --version
	I1122 00:17:00.437751  563925 ssh_runner.go:195] Run: crio --version
	I1122 00:17:00.498700  563925 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1122 00:17:00.502322  563925 out.go:179]   - env NO_PROXY=192.168.49.2
	I1122 00:17:00.505365  563925 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1122 00:17:00.508493  563925 cli_runner.go:164] Run: docker network inspect ha-561110 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:17:00.538039  563925 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1122 00:17:00.545403  563925 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:17:00.558621  563925 mustload.go:66] Loading cluster: ha-561110
	I1122 00:17:00.558938  563925 config.go:182] Loaded profile config "ha-561110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:17:00.559221  563925 cli_runner.go:164] Run: docker container inspect ha-561110 --format={{.State.Status}}
	I1122 00:17:00.586783  563925 host.go:66] Checking if "ha-561110" exists ...
	I1122 00:17:00.587143  563925 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110 for IP: 192.168.49.4
	I1122 00:17:00.587159  563925 certs.go:195] generating shared ca certs ...
	I1122 00:17:00.587181  563925 certs.go:227] acquiring lock for ca certs: {Name:mkaf4c79493334cb2058b349c36be7473837f9b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:17:00.587353  563925 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key
	I1122 00:17:00.587400  563925 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key
	I1122 00:17:00.587412  563925 certs.go:257] generating profile certs ...
	I1122 00:17:00.587496  563925 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/client.key
	I1122 00:17:00.587573  563925 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.key.be48eb15
	I1122 00:17:00.587622  563925 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.key
	I1122 00:17:00.587635  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1122 00:17:00.587651  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1122 00:17:00.587667  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1122 00:17:00.587723  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1122 00:17:00.587739  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1122 00:17:00.587752  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1122 00:17:00.587768  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1122 00:17:00.587778  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1122 00:17:00.587836  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem (1338 bytes)
	W1122 00:17:00.587877  563925 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937_empty.pem, impossibly tiny 0 bytes
	I1122 00:17:00.587891  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:17:00.587929  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:17:00.587961  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:17:00.587990  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem (1675 bytes)
	I1122 00:17:00.588101  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem (1708 bytes)
	I1122 00:17:00.588199  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem -> /usr/share/ca-certificates/516937.pem
	I1122 00:17:00.588226  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem -> /usr/share/ca-certificates/5169372.pem
	I1122 00:17:00.588241  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:17:00.588312  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110
	I1122 00:17:00.613873  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33535 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110/id_rsa Username:docker}
	I1122 00:17:00.714215  563925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1122 00:17:00.718718  563925 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1122 00:17:00.729019  563925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1122 00:17:00.733330  563925 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1122 00:17:00.743477  563925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1122 00:17:00.747658  563925 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1122 00:17:00.758201  563925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1122 00:17:00.763435  563925 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1122 00:17:00.773425  563925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1122 00:17:00.777456  563925 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1122 00:17:00.787246  563925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1122 00:17:00.791598  563925 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1122 00:17:00.801660  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:17:00.826055  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1122 00:17:00.848933  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:17:00.888604  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:17:00.921496  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1122 00:17:00.951086  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1122 00:17:00.975145  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:17:00.999138  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1122 00:17:01.024534  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem --> /usr/share/ca-certificates/516937.pem (1338 bytes)
	I1122 00:17:01.046560  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /usr/share/ca-certificates/5169372.pem (1708 bytes)
	I1122 00:17:01.072877  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:17:01.103089  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1122 00:17:01.119601  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1122 00:17:01.136419  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1122 00:17:01.153380  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1122 00:17:01.171240  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1122 00:17:01.202584  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1122 00:17:01.223852  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1122 00:17:01.247292  563925 ssh_runner.go:195] Run: openssl version
	I1122 00:17:01.259516  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:17:01.280780  563925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:17:01.289039  563925 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:17:01.289158  563925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:17:01.373640  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1122 00:17:01.395461  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516937.pem && ln -fs /usr/share/ca-certificates/516937.pem /etc/ssl/certs/516937.pem"
	I1122 00:17:01.420524  563925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516937.pem
	I1122 00:17:01.426623  563925 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:55 /usr/share/ca-certificates/516937.pem
	I1122 00:17:01.426698  563925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516937.pem
	I1122 00:17:01.478449  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516937.pem /etc/ssl/certs/51391683.0"
	I1122 00:17:01.490493  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5169372.pem && ln -fs /usr/share/ca-certificates/5169372.pem /etc/ssl/certs/5169372.pem"
	I1122 00:17:01.502084  563925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5169372.pem
	I1122 00:17:01.507855  563925 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:55 /usr/share/ca-certificates/5169372.pem
	I1122 00:17:01.507956  563925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5169372.pem
	I1122 00:17:01.587957  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5169372.pem /etc/ssl/certs/3ec20f2e.0"
	I1122 00:17:01.599719  563925 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:17:01.605126  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1122 00:17:01.660029  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1122 00:17:01.712345  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1122 00:17:01.786467  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1122 00:17:01.862166  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1122 00:17:01.946187  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
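	Each of the openssl x509 -checkend 86400 runs above asks whether a certificate will expire within the next 24 hours (86400 seconds). The same check can be done in Go without shelling out to openssl; the sketch below is an illustrative equivalent, not the code minikube runs, and the certificate path in main is an assumption chosen to mirror one of the checks above.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at pemPath will expire
	// within d (the inverse of openssl's -checkend success condition).
	func expiresWithin(pemPath string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(pemPath)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block found in %s", pemPath)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		// Path is illustrative; the log checks several certs under /var/lib/minikube/certs.
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}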
	I1122 00:17:02.010384  563925 kubeadm.go:935] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1122 00:17:02.010523  563925 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-561110-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-561110 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1122 00:17:02.010557  563925 kube-vip.go:115] generating kube-vip config ...
	I1122 00:17:02.010619  563925 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1122 00:17:02.037246  563925 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1122 00:17:02.037316  563925 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1122 00:17:02.037405  563925 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 00:17:02.052472  563925 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:17:02.052567  563925 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1122 00:17:02.073857  563925 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1122 00:17:02.112139  563925 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:17:02.133854  563925 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1122 00:17:02.152649  563925 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1122 00:17:02.158389  563925 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:17:02.184228  563925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:17:02.493772  563925 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:17:02.514312  563925 start.go:236] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:17:02.514696  563925 config.go:182] Loaded profile config "ha-561110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:17:02.518824  563925 out.go:179] * Verifying Kubernetes components...
	I1122 00:17:02.521919  563925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:17:02.746981  563925 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:17:02.765468  563925 kapi.go:59] client config for ha-561110: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/client.crt", KeyFile:"/home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/client.key", CAFile:"/home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1122 00:17:02.765589  563925 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1122 00:17:02.765898  563925 node_ready.go:35] waiting up to 6m0s for node "ha-561110-m03" to be "Ready" ...
	W1122 00:17:04.770183  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:06.771513  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:09.269611  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:11.270683  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:13.275612  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:15.769660  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:17.769933  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:20.269315  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:22.270943  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:24.769260  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:26.770369  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:29.269015  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:31.269858  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:33.269945  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:35.769971  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:38.269922  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:40.270335  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:42.271149  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:44.770140  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:47.269690  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:49.270654  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:51.770465  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:54.269768  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:56.769254  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:58.769625  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:00.770202  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:02.773270  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:05.270130  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:07.271583  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:09.769397  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:11.770012  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:13.770106  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:16.270008  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:18.771373  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:21.270047  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:23.768948  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:25.770213  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:28.269635  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:30.770096  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:32.771794  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:35.270059  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:37.769842  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:40.269289  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:42.273345  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:44.275125  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:46.776656  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:49.270280  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:51.770076  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:54.269588  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:56.270135  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:58.768991  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:00.771422  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:03.269840  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:05.270420  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:07.770020  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:10.268980  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:12.269695  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:14.769271  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:16.769509  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:19.270240  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:21.769249  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:23.770580  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:26.269982  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:28.770054  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:31.269163  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:33.269886  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:35.270677  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:37.769622  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:39.769703  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:42.270956  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:44.768762  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:46.769989  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:49.269515  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:51.270122  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:53.769467  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:55.770293  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:58.269947  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:00.322810  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:02.769554  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:04.770551  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:07.269784  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:09.769344  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:11.769990  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:14.269132  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:16.269765  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:18.770174  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:21.269837  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:23.270065  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:25.770172  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:28.269279  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:30.270734  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:32.769392  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:34.769668  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:36.770010  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:38.770203  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:40.770721  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:43.270389  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:45.276123  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:47.770112  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:50.269310  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:52.269861  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:54.270570  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:56.769591  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:58.770126  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:01.270099  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:03.769793  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:05.771503  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:08.269537  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:10.770347  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:13.269687  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:15.270464  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:17.271724  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:19.769950  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:22.269581  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:24.269903  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:26.269977  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:28.769453  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:30.770323  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:33.270153  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:35.769486  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:37.770126  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:39.770389  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:42.273464  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:44.769688  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:46.770370  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:49.269335  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:51.270430  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:53.769776  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:56.269697  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:58.270251  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:00.292924  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:02.779828  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:05.270290  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:07.270475  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:09.769072  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:11.769917  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:13.770097  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:16.269780  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:18.269850  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:20.276178  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:22.770032  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:25.270326  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:27.769736  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:30.270331  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:32.768987  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:35.269587  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:37.770642  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:40.269226  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:42.281918  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:44.770302  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:47.269651  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:49.270011  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:51.770305  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:54.269848  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:56.269962  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:58.770073  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:23:00.770445  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	I1122 00:23:02.766152  563925 node_ready.go:38] duration metric: took 6m0.000206678s for node "ha-561110-m03" to be "Ready" ...
	I1122 00:23:02.769486  563925 out.go:203] 
	W1122 00:23:02.772416  563925 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1122 00:23:02.772436  563925 out.go:285] * 
	W1122 00:23:02.774635  563925 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1122 00:23:02.776836  563925 out.go:203] 
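
The retry loop above is minikube's node_ready check: it polls the node object's Ready condition until it turns True or the 6m0s budget expires, and because ha-561110-m03's kubelet never came back the run ends in GUEST_START with "context deadline exceeded". Below is a minimal sketch of an equivalent poll written with client-go; the kubeconfig path, poll interval, and function names are assumptions for illustration, not minikube's actual code.

// node_ready_sketch.go - hypothetical illustration only; not part of minikube.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node's Ready condition, much like the
// node_ready.go loop in the log above, until it is True or ctx expires.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("node %q not Ready: %w", name, ctx.Err())
		case <-time.After(2 * time.Second): // assumed poll interval
		}
	}
}

func main() {
	// Assumed kubeconfig path; the real run authenticates with the profile's client.crt/key.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitNodeReady(ctx, cs, "ha-561110-m03"); err != nil {
		fmt.Println("not ready:", err)
	}
}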
	
	
	==> CRI-O <==
	Nov 22 00:15:52 ha-561110 crio[666]: time="2025-11-22T00:15:52.39043996Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=6b55a33c-982b-407b-a39e-f5c092d837ad name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:15:52 ha-561110 crio[666]: time="2025-11-22T00:15:52.391455898Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=aed84f71-7deb-4060-a2b1-3504a94ddccd name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:15:52 ha-561110 crio[666]: time="2025-11-22T00:15:52.391592756Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:15:52 ha-561110 crio[666]: time="2025-11-22T00:15:52.398141795Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:15:52 ha-561110 crio[666]: time="2025-11-22T00:15:52.398456674Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/5080ecedb1aca210f92642c0da614341ac5baee6bb123e6d3efa15080462423f/merged/etc/passwd: no such file or directory"
	Nov 22 00:15:52 ha-561110 crio[666]: time="2025-11-22T00:15:52.398549644Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/5080ecedb1aca210f92642c0da614341ac5baee6bb123e6d3efa15080462423f/merged/etc/group: no such file or directory"
	Nov 22 00:15:52 ha-561110 crio[666]: time="2025-11-22T00:15:52.398849032Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:15:52 ha-561110 crio[666]: time="2025-11-22T00:15:52.429646993Z" level=info msg="Created container 135f8581d288b240b9c444b0861bec261a02882a56b15c99e1bb476a861d296a: kube-system/storage-provisioner/storage-provisioner" id=aed84f71-7deb-4060-a2b1-3504a94ddccd name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:15:52 ha-561110 crio[666]: time="2025-11-22T00:15:52.430745119Z" level=info msg="Starting container: 135f8581d288b240b9c444b0861bec261a02882a56b15c99e1bb476a861d296a" id=781c7a19-539c-4417-a691-8f4e096b71ed name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:15:52 ha-561110 crio[666]: time="2025-11-22T00:15:52.435089701Z" level=info msg="Started container" PID=1391 containerID=135f8581d288b240b9c444b0861bec261a02882a56b15c99e1bb476a861d296a description=kube-system/storage-provisioner/storage-provisioner id=781c7a19-539c-4417-a691-8f4e096b71ed name=/runtime.v1.RuntimeService/StartContainer sandboxID=de4629de69837fe0447ae13245102ae0d04524a3858dcce8a9d5b8e10bb91eaf
	Nov 22 00:16:01 ha-561110 crio[666]: time="2025-11-22T00:16:01.470325793Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:16:01 ha-561110 crio[666]: time="2025-11-22T00:16:01.474774281Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:16:01 ha-561110 crio[666]: time="2025-11-22T00:16:01.474811154Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 22 00:16:01 ha-561110 crio[666]: time="2025-11-22T00:16:01.474835695Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:16:01 ha-561110 crio[666]: time="2025-11-22T00:16:01.478898906Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:16:01 ha-561110 crio[666]: time="2025-11-22T00:16:01.478939659Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 22 00:16:01 ha-561110 crio[666]: time="2025-11-22T00:16:01.478962272Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:16:01 ha-561110 crio[666]: time="2025-11-22T00:16:01.482066282Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:16:01 ha-561110 crio[666]: time="2025-11-22T00:16:01.482101062Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 22 00:16:01 ha-561110 crio[666]: time="2025-11-22T00:16:01.482122772Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:16:01 ha-561110 crio[666]: time="2025-11-22T00:16:01.485482939Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:16:01 ha-561110 crio[666]: time="2025-11-22T00:16:01.485521674Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 22 00:16:01 ha-561110 crio[666]: time="2025-11-22T00:16:01.485545829Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:16:01 ha-561110 crio[666]: time="2025-11-22T00:16:01.488891801Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:16:01 ha-561110 crio[666]: time="2025-11-22T00:16:01.48892796Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	135f8581d288b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   7 minutes ago       Running             storage-provisioner       2                   de4629de69837       storage-provisioner                 kube-system
	fe1c6226bf4c6       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   1                   b641fd83b9816       coredns-66bc5c9577-vp8f5            kube-system
	69ffa71725510       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   1                   1104258d7fdef       coredns-66bc5c9577-rrkkw            kube-system
	60513ca704c00       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   7 minutes ago       Exited              storage-provisioner       1                   de4629de69837       storage-provisioner                 kube-system
	d9e4613f17ffd       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   7 minutes ago       Running             kube-proxy                1                   1ff3e662bdd09       kube-proxy-fh5cv                    kube-system
	a2d8ce4bb1edd       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   7 minutes ago       Running             busybox                   1                   b8e440f614e56       busybox-7b57f96db7-fbtrb            default
	5a2fb45570b8d       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   7 minutes ago       Running             kindnet-cni               1                   55f44270c0111       kindnet-7g65m                       kube-system
	555f050993ba2       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   7 minutes ago       Running             kube-controller-manager   2                   10dbad5a4508a       kube-controller-manager-ha-561110   kube-system
	4cbb3fde391bd       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   8 minutes ago       Running             kube-apiserver            1                   38691a4dbf6ea       kube-apiserver-ha-561110            kube-system
	4360f5517fd5e       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   8 minutes ago       Running             kube-scheduler            1                   e0baba9cafe90       kube-scheduler-ha-561110            kube-system
	a395e7473ffe2       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   8 minutes ago       Running             etcd                      1                   193446051a803       etcd-ha-561110                      kube-system
	9fdf72902e6e0       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   8 minutes ago       Running             kube-vip                  0                   884d14e2e6045       kube-vip-ha-561110                  kube-system
	1c929db60119a       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   8 minutes ago       Exited              kube-controller-manager   1                   10dbad5a4508a       kube-controller-manager-ha-561110   kube-system
	
	
	==> coredns [69ffa7172551035e0586a2f61f518f9846bd0b87abc14ba1505f02248c5a9a02] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39796 - 60732 "HINFO IN 576766510875163090.3461274759123809982. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.004198928s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [fe1c6226bf4c6a8f0d43125ecd01e36e538a750fd9dd5c3edb73d4ffd5a90aff] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58159 - 30701 "HINFO IN 6742751567940684104.616832762995402637. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.025967847s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
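
The i/o timeouts in both CoreDNS logs come from the kubernetes plugin trying to list Services, Namespaces, and EndpointSlices through 10.96.0.1:443, the ClusterIP of the default kubernetes Service; until the restarted apiserver and kube-proxy converge those lists fail and the ready plugin keeps reporting "Still waiting on: kubernetes". The rough sketch below shows how an in-cluster client reaches that same endpoint; it is for illustration only and is not CoreDNS's actual code (the Limit value simply mirrors the limit=500 seen in the log).

// incluster_sketch.go - hypothetical illustration only.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// InClusterConfig points the client at https://KUBERNETES_SERVICE_HOST:PORT,
	// i.e. the ClusterIP 10.96.0.1:443 that CoreDNS was timing out against.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The same kind of list the kubernetes plugin issues while warming its caches.
	svcs, err := cs.CoreV1().Services(metav1.NamespaceAll).List(context.Background(), metav1.ListOptions{Limit: 500})
	if err != nil {
		fmt.Println("list services failed (as in the log):", err)
		return
	}
	fmt.Println("services:", len(svcs.Items))
}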
	
	
	==> describe nodes <==
	Name:               ha-561110
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-561110
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=ha-561110
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T00_09_04_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:09:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-561110
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:22:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:21:46 +0000   Sat, 22 Nov 2025 00:08:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:21:46 +0000   Sat, 22 Nov 2025 00:08:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:21:46 +0000   Sat, 22 Nov 2025 00:08:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 22 Nov 2025 00:21:46 +0000   Sat, 22 Nov 2025 00:15:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-561110
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c92eea4d3c03c0156e01fcf8691e3907
	  System UUID:                77a39681-2950-4264-8660-77e1aeddeb83
	  Boot ID:                    72ac7385-472f-47d1-a23e-bd80468d6e09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-fbtrb             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-66bc5c9577-rrkkw             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     13m
	  kube-system                 coredns-66bc5c9577-vp8f5             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     13m
	  kube-system                 etcd-ha-561110                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         14m
	  kube-system                 kindnet-7g65m                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-561110             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-561110    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-fh5cv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-561110             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-561110                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m39s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m41s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-561110 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14m (x8 over 14m)      kubelet          Node ha-561110 status is now: NodeHasSufficientPID
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 14m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-561110 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 14m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     14m                    kubelet          Node ha-561110 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  14m                    kubelet          Node ha-561110 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m                    kubelet          Node ha-561110 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           13m                    node-controller  Node ha-561110 event: Registered Node ha-561110 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-561110 event: Registered Node ha-561110 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-561110 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-561110 event: Registered Node ha-561110 in Controller
	  Normal   RegisteredNode           8m45s                  node-controller  Node ha-561110 event: Registered Node ha-561110 in Controller
	  Warning  CgroupV1                 8m17s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8m16s (x8 over 8m16s)  kubelet          Node ha-561110 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m16s (x8 over 8m16s)  kubelet          Node ha-561110 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m16s (x8 over 8m16s)  kubelet          Node ha-561110 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m42s                  node-controller  Node ha-561110 event: Registered Node ha-561110 in Controller
	  Normal   RegisteredNode           7m34s                  node-controller  Node ha-561110 event: Registered Node ha-561110 in Controller
	
	
	Name:               ha-561110-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-561110-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=ha-561110
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_22T00_09_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:09:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-561110-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:22:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:22:27 +0000   Sat, 22 Nov 2025 00:09:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:22:27 +0000   Sat, 22 Nov 2025 00:09:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:22:27 +0000   Sat, 22 Nov 2025 00:09:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 22 Nov 2025 00:22:27 +0000   Sat, 22 Nov 2025 00:10:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-561110-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c92eea4d3c03c0156e01fcf8691e3907
	  System UUID:                a2162c95-cc29-4cd8-8a91-589e6eb1ab6b
	  Boot ID:                    72ac7385-472f-47d1-a23e-bd80468d6e09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-dx9nw                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-561110-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-dltvw                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-561110-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-561110-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-b8wb5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-561110-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-561110-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m30s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   RegisteredNode           13m                    node-controller  Node ha-561110-m02 event: Registered Node ha-561110-m02 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-561110-m02 event: Registered Node ha-561110-m02 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-561110-m02 event: Registered Node ha-561110-m02 in Controller
	  Normal   NodeHasSufficientPID     9m18s (x8 over 9m18s)  kubelet          Node ha-561110-m02 status is now: NodeHasSufficientPID
	  Normal   Starting                 9m18s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m18s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9m18s (x8 over 9m18s)  kubelet          Node ha-561110-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m18s (x8 over 9m18s)  kubelet          Node ha-561110-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           8m45s                  node-controller  Node ha-561110-m02 event: Registered Node ha-561110-m02 in Controller
	  Normal   Starting                 8m13s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m13s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8m12s (x8 over 8m13s)  kubelet          Node ha-561110-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m12s (x8 over 8m13s)  kubelet          Node ha-561110-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m12s (x8 over 8m13s)  kubelet          Node ha-561110-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m42s                  node-controller  Node ha-561110-m02 event: Registered Node ha-561110-m02 in Controller
	  Normal   RegisteredNode           7m34s                  node-controller  Node ha-561110-m02 event: Registered Node ha-561110-m02 in Controller
	
	
	Name:               ha-561110-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-561110-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=ha-561110
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_22T00_11_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:11:01 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-561110-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:14:05 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 22 Nov 2025 00:12:44 +0000   Sat, 22 Nov 2025 00:16:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 22 Nov 2025 00:12:44 +0000   Sat, 22 Nov 2025 00:16:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 22 Nov 2025 00:12:44 +0000   Sat, 22 Nov 2025 00:16:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 22 Nov 2025 00:12:44 +0000   Sat, 22 Nov 2025 00:16:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-561110-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c92eea4d3c03c0156e01fcf8691e3907
	  System UUID:                992c36b7-260a-4e71-a461-53a9d9f9f201
	  Boot ID:                    72ac7385-472f-47d1-a23e-bd80468d6e09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-jnjz9                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-561110-m03                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-w4kh7                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-ha-561110-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-561110-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-v5ndg                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-561110-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-561110-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        11m    kube-proxy       
	  Normal  RegisteredNode  12m    node-controller  Node ha-561110-m03 event: Registered Node ha-561110-m03 in Controller
	  Normal  RegisteredNode  12m    node-controller  Node ha-561110-m03 event: Registered Node ha-561110-m03 in Controller
	  Normal  RegisteredNode  11m    node-controller  Node ha-561110-m03 event: Registered Node ha-561110-m03 in Controller
	  Normal  RegisteredNode  8m45s  node-controller  Node ha-561110-m03 event: Registered Node ha-561110-m03 in Controller
	  Normal  RegisteredNode  7m42s  node-controller  Node ha-561110-m03 event: Registered Node ha-561110-m03 in Controller
	  Normal  RegisteredNode  7m34s  node-controller  Node ha-561110-m03 event: Registered Node ha-561110-m03 in Controller
	  Normal  NodeNotReady    6m52s  node-controller  Node ha-561110-m03 status is now: NodeNotReady
	
	
	Name:               ha-561110-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-561110-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=ha-561110
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_22T00_12_27_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:12:26 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-561110-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:14:09 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 22 Nov 2025 00:13:09 +0000   Sat, 22 Nov 2025 00:16:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 22 Nov 2025 00:13:09 +0000   Sat, 22 Nov 2025 00:16:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 22 Nov 2025 00:13:09 +0000   Sat, 22 Nov 2025 00:16:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 22 Nov 2025 00:13:09 +0000   Sat, 22 Nov 2025 00:16:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-561110-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c92eea4d3c03c0156e01fcf8691e3907
	  System UUID:                00d86356-c884-4dfd-a214-95f51a02c157
	  Boot ID:                    72ac7385-472f-47d1-a23e-bd80468d6e09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-4tkd6       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-proxy-2vctt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x3 over 10m)  kubelet          Node ha-561110-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x3 over 10m)  kubelet          Node ha-561110-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x3 over 10m)  kubelet          Node ha-561110-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node ha-561110-m04 event: Registered Node ha-561110-m04 in Controller
	  Normal  RegisteredNode           10m                node-controller  Node ha-561110-m04 event: Registered Node ha-561110-m04 in Controller
	  Normal  RegisteredNode           10m                node-controller  Node ha-561110-m04 event: Registered Node ha-561110-m04 in Controller
	  Normal  NodeReady                9m55s              kubelet          Node ha-561110-m04 status is now: NodeReady
	  Normal  RegisteredNode           8m45s              node-controller  Node ha-561110-m04 event: Registered Node ha-561110-m04 in Controller
	  Normal  RegisteredNode           7m42s              node-controller  Node ha-561110-m04 event: Registered Node ha-561110-m04 in Controller
	  Normal  RegisteredNode           7m34s              node-controller  Node ha-561110-m04 event: Registered Node ha-561110-m04 in Controller
	  Normal  NodeNotReady             6m52s              node-controller  Node ha-561110-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[Nov21 23:16] overlayfs: idmapped layers are currently not supported
	[Nov21 23:17] overlayfs: idmapped layers are currently not supported
	[ +10.681159] overlayfs: idmapped layers are currently not supported
	[Nov21 23:19] overlayfs: idmapped layers are currently not supported
	[ +15.192296] overlayfs: idmapped layers are currently not supported
	[Nov21 23:20] overlayfs: idmapped layers are currently not supported
	[Nov21 23:21] overlayfs: idmapped layers are currently not supported
	[Nov21 23:22] overlayfs: idmapped layers are currently not supported
	[ +12.884842] overlayfs: idmapped layers are currently not supported
	[Nov21 23:23] overlayfs: idmapped layers are currently not supported
	[ +12.022080] overlayfs: idmapped layers are currently not supported
	[Nov21 23:25] overlayfs: idmapped layers are currently not supported
	[ +24.447615] overlayfs: idmapped layers are currently not supported
	[Nov21 23:46] kauditd_printk_skb: 8 callbacks suppressed
	[Nov21 23:48] overlayfs: idmapped layers are currently not supported
	[Nov21 23:54] overlayfs: idmapped layers are currently not supported
	[Nov21 23:55] overlayfs: idmapped layers are currently not supported
	[Nov22 00:08] overlayfs: idmapped layers are currently not supported
	[Nov22 00:09] overlayfs: idmapped layers are currently not supported
	[Nov22 00:10] overlayfs: idmapped layers are currently not supported
	[Nov22 00:12] overlayfs: idmapped layers are currently not supported
	[Nov22 00:13] overlayfs: idmapped layers are currently not supported
	[Nov22 00:14] overlayfs: idmapped layers are currently not supported
	[  +3.904643] overlayfs: idmapped layers are currently not supported
	[Nov22 00:15] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [a395e7473ffe2b7999ae75a70e19b4f153d459c8ccae48aeeb71b5b6248cc1f2] <==
	{"level":"warn","ts":"2025-11-22T00:22:37.982593Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"700ebc6e9635b48f","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-22T00:22:39.400689Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"700ebc6e9635b48f","rtt":"61.899654ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-22T00:22:39.400702Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"700ebc6e9635b48f","rtt":"51.609423ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-22T00:22:41.983900Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"700ebc6e9635b48f","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-22T00:22:41.983963Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"700ebc6e9635b48f","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-22T00:22:44.401144Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"700ebc6e9635b48f","rtt":"61.899654ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-22T00:22:44.401134Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"700ebc6e9635b48f","rtt":"51.609423ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-22T00:22:45.985115Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"700ebc6e9635b48f","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-22T00:22:45.985177Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"700ebc6e9635b48f","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-22T00:22:49.401824Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"700ebc6e9635b48f","rtt":"51.609423ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-22T00:22:49.401815Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"700ebc6e9635b48f","rtt":"61.899654ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-22T00:22:49.986889Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"700ebc6e9635b48f","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-22T00:22:49.987009Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"700ebc6e9635b48f","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-22T00:22:53.988106Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"700ebc6e9635b48f","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-22T00:22:53.988161Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"700ebc6e9635b48f","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-22T00:22:54.402648Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"700ebc6e9635b48f","rtt":"51.609423ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-22T00:22:54.402639Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"700ebc6e9635b48f","rtt":"61.899654ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-22T00:22:57.989780Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"700ebc6e9635b48f","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-22T00:22:57.989860Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"700ebc6e9635b48f","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-22T00:22:59.403601Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"700ebc6e9635b48f","rtt":"51.609423ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-22T00:22:59.403589Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"700ebc6e9635b48f","rtt":"61.899654ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-22T00:23:01.991023Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"700ebc6e9635b48f","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-22T00:23:01.991083Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"700ebc6e9635b48f","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-22T00:23:04.403878Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"700ebc6e9635b48f","rtt":"61.899654ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-22T00:23:04.403933Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"700ebc6e9635b48f","rtt":"51.609423ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	
	
	==> kernel <==
	 00:23:04 up  5:05,  0 user,  load average: 0.19, 0.93, 1.17
	Linux ha-561110 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5a2fb45570b8d8d9729d3fcc9460e054e1a5757ce0b35d5e4c6ab8f496780c4f] <==
	I1122 00:22:31.466198       1 main.go:324] Node ha-561110-m03 has CIDR [10.244.2.0/24] 
	I1122 00:22:41.465025       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1122 00:22:41.465060       1 main.go:301] handling current node
	I1122 00:22:41.465076       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1122 00:22:41.465081       1 main.go:324] Node ha-561110-m02 has CIDR [10.244.1.0/24] 
	I1122 00:22:41.465232       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1122 00:22:41.465247       1 main.go:324] Node ha-561110-m03 has CIDR [10.244.2.0/24] 
	I1122 00:22:41.465303       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1122 00:22:41.465309       1 main.go:324] Node ha-561110-m04 has CIDR [10.244.3.0/24] 
	I1122 00:22:51.473085       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1122 00:22:51.473126       1 main.go:301] handling current node
	I1122 00:22:51.473142       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1122 00:22:51.473148       1 main.go:324] Node ha-561110-m02 has CIDR [10.244.1.0/24] 
	I1122 00:22:51.473281       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1122 00:22:51.473294       1 main.go:324] Node ha-561110-m03 has CIDR [10.244.2.0/24] 
	I1122 00:22:51.473352       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1122 00:22:51.473363       1 main.go:324] Node ha-561110-m04 has CIDR [10.244.3.0/24] 
	I1122 00:23:01.470409       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1122 00:23:01.470552       1 main.go:301] handling current node
	I1122 00:23:01.470584       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1122 00:23:01.470592       1 main.go:324] Node ha-561110-m02 has CIDR [10.244.1.0/24] 
	I1122 00:23:01.470789       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1122 00:23:01.470804       1 main.go:324] Node ha-561110-m03 has CIDR [10.244.2.0/24] 
	I1122 00:23:01.470889       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1122 00:23:01.470900       1 main.go:324] Node ha-561110-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [4cbb3fde391bd86e756416ec260b0b8a5501d5139da802107965d9e012c4eca5] <==
	I1122 00:15:18.445997       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1122 00:15:18.446301       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1122 00:15:18.447701       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3]
	I1122 00:15:18.452038       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1122 00:15:18.452137       1 policy_source.go:240] refreshing policies
	I1122 00:15:18.460639       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1122 00:15:18.471883       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1122 00:15:18.471973       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1122 00:15:18.484728       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1122 00:15:18.486315       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1122 00:15:18.488710       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1122 00:15:18.492798       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1122 00:15:18.495280       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1122 00:15:18.507423       1 cache.go:39] Caches are synced for autoregister controller
	I1122 00:15:18.534574       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:15:18.549678       1 controller.go:667] quota admission added evaluator for: endpoints
	I1122 00:15:18.565788       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1122 00:15:18.571045       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1122 00:15:19.403170       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1122 00:15:19.403318       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	W1122 00:15:19.985311       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I1122 00:15:20.110990       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1122 00:15:22.839985       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1122 00:15:22.952373       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1122 00:15:33.431623       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [1c929db60119ab54f03020d00f2063dc6672d329ea34f4504e502142bffbe644] <==
	I1122 00:14:51.749993       1 serving.go:386] Generated self-signed cert in-memory
	I1122 00:14:53.094715       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1122 00:14:53.095280       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:14:53.099971       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1122 00:14:53.101968       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1122 00:14:53.102195       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1122 00:14:53.102364       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1122 00:15:08.891956       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-controller-manager [555f050993ba210ea8b5a432f7b9d055cece81e4f3e958134fe029c08873937f] <==
	I1122 00:15:22.663380       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1122 00:15:22.665955       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:15:22.665980       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1122 00:15:22.665989       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1122 00:15:22.670916       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1122 00:15:22.671810       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1122 00:15:22.671975       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1122 00:15:22.674739       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1122 00:15:22.700683       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1122 00:15:22.700732       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1122 00:15:22.700975       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1122 00:15:22.701031       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-561110-m04"
	I1122 00:15:22.702027       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:15:22.702218       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:15:22.702265       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1122 00:15:22.702335       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1122 00:15:22.702421       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-561110-m04"
	I1122 00:15:22.702475       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-561110"
	I1122 00:15:22.702508       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-561110-m02"
	I1122 00:15:22.702530       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-561110-m03"
	I1122 00:15:22.703121       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1122 00:16:02.360319       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-fg476 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-fg476\": the object has been modified; please apply your changes to the latest version and try again"
	I1122 00:16:02.360917       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"9e1e93e1-00b2-4af4-b92a-649228d61b24", APIVersion:"v1", ResourceVersion:"291", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-fg476 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-fg476": the object has been modified; please apply your changes to the latest version and try again
	I1122 00:21:22.783104       1 taint_eviction.go:111] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-7b57f96db7-jnjz9"
	E1122 00:21:23.049792       1 replica_set.go:587] "Unhandled Error" err="sync \"default/busybox-7b57f96db7\" failed with Operation cannot be fulfilled on replicasets.apps \"busybox-7b57f96db7\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	
	
	==> kube-proxy [d9e4613f17ffd567cd78a387d7add1e58e4b781fbb445147b8bfca54b9432ab5] <==
	I1122 00:15:21.735861       1 server_linux.go:53] "Using iptables proxy"
	I1122 00:15:22.275352       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1122 00:15:22.376449       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1122 00:15:22.376485       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1122 00:15:22.376557       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1122 00:15:22.535668       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1122 00:15:22.535795       1 server_linux.go:132] "Using iptables Proxier"
	I1122 00:15:22.609065       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1122 00:15:22.609513       1 server.go:527] "Version info" version="v1.34.1"
	I1122 00:15:22.609711       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:15:22.617349       1 config.go:106] "Starting endpoint slice config controller"
	I1122 00:15:22.642095       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1122 00:15:22.642216       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1122 00:15:22.625017       1 config.go:309] "Starting node config controller"
	I1122 00:15:22.642311       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1122 00:15:22.661330       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1122 00:15:22.618034       1 config.go:403] "Starting serviceCIDR config controller"
	I1122 00:15:22.661456       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1122 00:15:22.661484       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1122 00:15:22.617914       1 config.go:200] "Starting service config controller"
	I1122 00:15:22.667161       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1122 00:15:22.669962       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [4360f5517fd5eb7d570a98dee1b801419d3b650d7e890d5ddecc79946fba46db] <==
	E1122 00:15:06.983690       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1122 00:15:07.083902       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1122 00:15:07.669484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1122 00:15:08.371857       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1122 00:15:08.496001       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1122 00:15:09.010289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1122 00:15:09.013639       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1122 00:15:09.181881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1122 00:15:13.452037       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1122 00:15:13.489179       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1122 00:15:13.596578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1122 00:15:15.338465       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1122 00:15:15.586170       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1122 00:15:15.778676       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1122 00:15:16.291567       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1122 00:15:16.393784       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1122 00:15:16.654452       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1122 00:15:16.676020       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1122 00:15:16.720023       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1122 00:15:16.867178       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1122 00:15:17.894056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1122 00:15:17.894162       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1122 00:15:18.097763       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1122 00:15:18.407533       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1122 00:15:40.715350       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 22 00:15:19 ha-561110 kubelet[804]: E1122 00:15:19.202909     804 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-ha-561110\" already exists" pod="kube-system/etcd-ha-561110"
	Nov 22 00:15:19 ha-561110 kubelet[804]: E1122 00:15:19.208208     804 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-ha-561110\" already exists" pod="kube-system/etcd-ha-561110"
	Nov 22 00:15:19 ha-561110 kubelet[804]: I1122 00:15:19.208378     804 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ha-561110"
	Nov 22 00:15:19 ha-561110 kubelet[804]: E1122 00:15:19.222213     804 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ha-561110\" already exists" pod="kube-system/kube-apiserver-ha-561110"
	Nov 22 00:15:19 ha-561110 kubelet[804]: I1122 00:15:19.222405     804 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ha-561110"
	Nov 22 00:15:19 ha-561110 kubelet[804]: E1122 00:15:19.238353     804 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ha-561110\" already exists" pod="kube-system/kube-controller-manager-ha-561110"
	Nov 22 00:15:19 ha-561110 kubelet[804]: I1122 00:15:19.996652     804 apiserver.go:52] "Watching apiserver"
	Nov 22 00:15:20 ha-561110 kubelet[804]: I1122 00:15:20.004192     804 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 22 00:15:20 ha-561110 kubelet[804]: I1122 00:15:20.010755     804 kubelet.go:3202] "Trying to delete pod" pod="kube-system/kube-vip-ha-561110" podUID="f9bbfb1b-cc91-44c4-be9d-f028e6f3038f"
	Nov 22 00:15:20 ha-561110 kubelet[804]: I1122 00:15:20.042558     804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/318c6763-fea1-4564-86f6-18cfad691213-xtables-lock\") pod \"kube-proxy-fh5cv\" (UID: \"318c6763-fea1-4564-86f6-18cfad691213\") " pod="kube-system/kube-proxy-fh5cv"
	Nov 22 00:15:20 ha-561110 kubelet[804]: I1122 00:15:20.042916     804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/edeca4a6-de24-4444-be9c-cdcbf744f52a-lib-modules\") pod \"kindnet-7g65m\" (UID: \"edeca4a6-de24-4444-be9c-cdcbf744f52a\") " pod="kube-system/kindnet-7g65m"
	Nov 22 00:15:20 ha-561110 kubelet[804]: I1122 00:15:20.043044     804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/edeca4a6-de24-4444-be9c-cdcbf744f52a-xtables-lock\") pod \"kindnet-7g65m\" (UID: \"edeca4a6-de24-4444-be9c-cdcbf744f52a\") " pod="kube-system/kindnet-7g65m"
	Nov 22 00:15:20 ha-561110 kubelet[804]: I1122 00:15:20.043629     804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/318c6763-fea1-4564-86f6-18cfad691213-lib-modules\") pod \"kube-proxy-fh5cv\" (UID: \"318c6763-fea1-4564-86f6-18cfad691213\") " pod="kube-system/kube-proxy-fh5cv"
	Nov 22 00:15:20 ha-561110 kubelet[804]: I1122 00:15:20.043908     804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6bf95a26-263b-4088-904d-b344d4826342-tmp\") pod \"storage-provisioner\" (UID: \"6bf95a26-263b-4088-904d-b344d4826342\") " pod="kube-system/storage-provisioner"
	Nov 22 00:15:20 ha-561110 kubelet[804]: I1122 00:15:20.044454     804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/edeca4a6-de24-4444-be9c-cdcbf744f52a-cni-cfg\") pod \"kindnet-7g65m\" (UID: \"edeca4a6-de24-4444-be9c-cdcbf744f52a\") " pod="kube-system/kindnet-7g65m"
	Nov 22 00:15:20 ha-561110 kubelet[804]: I1122 00:15:20.069531     804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12f5cffcd2e0febd6c4ae07da010fd8f" path="/var/lib/kubelet/pods/12f5cffcd2e0febd6c4ae07da010fd8f/volumes"
	Nov 22 00:15:20 ha-561110 kubelet[804]: I1122 00:15:20.170059     804 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 22 00:15:20 ha-561110 kubelet[804]: I1122 00:15:20.199192     804 kubelet.go:3208] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-vip-ha-561110"
	Nov 22 00:15:20 ha-561110 kubelet[804]: I1122 00:15:20.199382     804 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-561110"
	Nov 22 00:15:20 ha-561110 kubelet[804]: W1122 00:15:20.465863     804 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/b491a219f5f6a6da2c04a012513c1a2266c783068f6131eddf3365f4209ece96/crio-b8e440f614e569ef72e19243d1540dd34639d19916d8b0e346545eb4867daf57 WatchSource:0}: Error finding container b8e440f614e569ef72e19243d1540dd34639d19916d8b0e346545eb4867daf57: Status 404 returned error can't find the container with id b8e440f614e569ef72e19243d1540dd34639d19916d8b0e346545eb4867daf57
	Nov 22 00:15:20 ha-561110 kubelet[804]: W1122 00:15:20.651808     804 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/b491a219f5f6a6da2c04a012513c1a2266c783068f6131eddf3365f4209ece96/crio-b641fd83b9816fb348d03cb35df6649a6ab3d78bdff2936914e0167db04fad0a WatchSource:0}: Error finding container b641fd83b9816fb348d03cb35df6649a6ab3d78bdff2936914e0167db04fad0a: Status 404 returned error can't find the container with id b641fd83b9816fb348d03cb35df6649a6ab3d78bdff2936914e0167db04fad0a
	Nov 22 00:15:47 ha-561110 kubelet[804]: E1122 00:15:47.996298     804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60db7bf551c52a828501521a7f79a373f51d5d988223afbc4f6f1a9ca6872e12\": container with ID starting with 60db7bf551c52a828501521a7f79a373f51d5d988223afbc4f6f1a9ca6872e12 not found: ID does not exist" containerID="60db7bf551c52a828501521a7f79a373f51d5d988223afbc4f6f1a9ca6872e12"
	Nov 22 00:15:47 ha-561110 kubelet[804]: I1122 00:15:47.996363     804 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="60db7bf551c52a828501521a7f79a373f51d5d988223afbc4f6f1a9ca6872e12" err="rpc error: code = NotFound desc = could not find container \"60db7bf551c52a828501521a7f79a373f51d5d988223afbc4f6f1a9ca6872e12\": container with ID starting with 60db7bf551c52a828501521a7f79a373f51d5d988223afbc4f6f1a9ca6872e12 not found: ID does not exist"
	Nov 22 00:15:52 ha-561110 kubelet[804]: I1122 00:15:52.388242     804 scope.go:117] "RemoveContainer" containerID="60513ca704c00c488d3491dd4f8a9e84dd69cf4c098d6dddf6f9ecba18d70a70"
	Nov 22 00:16:25 ha-561110 kubelet[804]: I1122 00:16:25.065664     804 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-561110"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-561110 -n ha-561110
helpers_test.go:269: (dbg) Run:  kubectl --context ha-561110 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-hkwmz
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-561110 describe pod busybox-7b57f96db7-hkwmz
helpers_test.go:290: (dbg) kubectl --context ha-561110 describe pod busybox-7b57f96db7-hkwmz:

                                                
                                                
-- stdout --
	Name:             busybox-7b57f96db7-hkwmz
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-82jj6 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-82jj6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age   From               Message
	  ----     ------            ----  ----               -------
	  Warning  FailedScheduling  103s  default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  103s  default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (532.40s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (8.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-561110 node delete m03 --alsologtostderr -v 5: (5.455675353s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 status --alsologtostderr -v 5
ha_test.go:495: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-561110 status --alsologtostderr -v 5: exit status 7 (610.744695ms)

                                                
                                                
-- stdout --
	ha-561110
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-561110-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-561110-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1122 00:23:11.248529  570089 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:23:11.248706  570089 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:23:11.248742  570089 out.go:374] Setting ErrFile to fd 2...
	I1122 00:23:11.248762  570089 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:23:11.249039  570089 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1122 00:23:11.249257  570089 out.go:368] Setting JSON to false
	I1122 00:23:11.249316  570089 mustload.go:66] Loading cluster: ha-561110
	I1122 00:23:11.249388  570089 notify.go:221] Checking for updates...
	I1122 00:23:11.249759  570089 config.go:182] Loaded profile config "ha-561110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:23:11.249835  570089 status.go:174] checking status of ha-561110 ...
	I1122 00:23:11.250684  570089 cli_runner.go:164] Run: docker container inspect ha-561110 --format={{.State.Status}}
	I1122 00:23:11.273640  570089 status.go:371] ha-561110 host status = "Running" (err=<nil>)
	I1122 00:23:11.273663  570089 host.go:66] Checking if "ha-561110" exists ...
	I1122 00:23:11.274022  570089 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-561110
	I1122 00:23:11.309924  570089 host.go:66] Checking if "ha-561110" exists ...
	I1122 00:23:11.310266  570089 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:23:11.310340  570089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110
	I1122 00:23:11.331299  570089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33535 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110/id_rsa Username:docker}
	I1122 00:23:11.431558  570089 ssh_runner.go:195] Run: systemctl --version
	I1122 00:23:11.438542  570089 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:23:11.452242  570089 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:23:11.535876  570089 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-22 00:23:11.519741595 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:23:11.536502  570089 kubeconfig.go:125] found "ha-561110" server: "https://192.168.49.254:8443"
	I1122 00:23:11.536546  570089 api_server.go:166] Checking apiserver status ...
	I1122 00:23:11.536595  570089 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:23:11.549272  570089 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/978/cgroup
	I1122 00:23:11.558421  570089 api_server.go:182] apiserver freezer: "4:freezer:/docker/b491a219f5f6a6da2c04a012513c1a2266c783068f6131eddf3365f4209ece96/crio/crio-4cbb3fde391bd86e756416ec260b0b8a5501d5139da802107965d9e012c4eca5"
	I1122 00:23:11.558504  570089 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b491a219f5f6a6da2c04a012513c1a2266c783068f6131eddf3365f4209ece96/crio/crio-4cbb3fde391bd86e756416ec260b0b8a5501d5139da802107965d9e012c4eca5/freezer.state
	I1122 00:23:11.567032  570089 api_server.go:204] freezer state: "THAWED"
	I1122 00:23:11.567061  570089 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1122 00:23:11.576726  570089 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1122 00:23:11.576759  570089 status.go:463] ha-561110 apiserver status = Running (err=<nil>)
	I1122 00:23:11.576776  570089 status.go:176] ha-561110 status: &{Name:ha-561110 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1122 00:23:11.576794  570089 status.go:174] checking status of ha-561110-m02 ...
	I1122 00:23:11.577131  570089 cli_runner.go:164] Run: docker container inspect ha-561110-m02 --format={{.State.Status}}
	I1122 00:23:11.595416  570089 status.go:371] ha-561110-m02 host status = "Running" (err=<nil>)
	I1122 00:23:11.595440  570089 host.go:66] Checking if "ha-561110-m02" exists ...
	I1122 00:23:11.595734  570089 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-561110-m02
	I1122 00:23:11.614704  570089 host.go:66] Checking if "ha-561110-m02" exists ...
	I1122 00:23:11.615101  570089 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:23:11.615148  570089 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m02
	I1122 00:23:11.633524  570089 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110-m02/id_rsa Username:docker}
	I1122 00:23:11.730926  570089 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:23:11.744625  570089 kubeconfig.go:125] found "ha-561110" server: "https://192.168.49.254:8443"
	I1122 00:23:11.744702  570089 api_server.go:166] Checking apiserver status ...
	I1122 00:23:11.744752  570089 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:23:11.756344  570089 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1186/cgroup
	I1122 00:23:11.764861  570089 api_server.go:182] apiserver freezer: "4:freezer:/docker/7d6499ec8cf7ec4144447335556155dc9339fb2fa81ff11bdceacbcfe39c0b98/crio/crio-ddb397e694e67ce75d7f412f9f120479116dc9d151c8326805e3b68073bfae91"
	I1122 00:23:11.764931  570089 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/7d6499ec8cf7ec4144447335556155dc9339fb2fa81ff11bdceacbcfe39c0b98/crio/crio-ddb397e694e67ce75d7f412f9f120479116dc9d151c8326805e3b68073bfae91/freezer.state
	I1122 00:23:11.772859  570089 api_server.go:204] freezer state: "THAWED"
	I1122 00:23:11.772887  570089 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1122 00:23:11.781452  570089 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1122 00:23:11.781529  570089 status.go:463] ha-561110-m02 apiserver status = Running (err=<nil>)
	I1122 00:23:11.781554  570089 status.go:176] ha-561110-m02 status: &{Name:ha-561110-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1122 00:23:11.781572  570089 status.go:174] checking status of ha-561110-m04 ...
	I1122 00:23:11.782101  570089 cli_runner.go:164] Run: docker container inspect ha-561110-m04 --format={{.State.Status}}
	I1122 00:23:11.799618  570089 status.go:371] ha-561110-m04 host status = "Stopped" (err=<nil>)
	I1122 00:23:11.799642  570089 status.go:384] host is not running, skipping remaining checks
	I1122 00:23:11.799649  570089 status.go:176] ha-561110-m04 status: &{Name:ha-561110-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:497: failed to run minikube status. args "out/minikube-linux-arm64 -p ha-561110 status --alsologtostderr -v 5" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-561110
helpers_test.go:243: (dbg) docker inspect ha-561110:

-- stdout --
	[
	    {
	        "Id": "b491a219f5f6a6da2c04a012513c1a2266c783068f6131eddf3365f4209ece96",
	        "Created": "2025-11-22T00:08:39.249293688Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 564052,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-22T00:14:41.326793505Z",
	            "FinishedAt": "2025-11-22T00:14:40.718153366Z"
	        },
	        "Image": "sha256:97061622515a85470dad13cb046ad5fc5021fcd2b05aa921a620abb52951cd5d",
	        "ResolvConfPath": "/var/lib/docker/containers/b491a219f5f6a6da2c04a012513c1a2266c783068f6131eddf3365f4209ece96/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b491a219f5f6a6da2c04a012513c1a2266c783068f6131eddf3365f4209ece96/hostname",
	        "HostsPath": "/var/lib/docker/containers/b491a219f5f6a6da2c04a012513c1a2266c783068f6131eddf3365f4209ece96/hosts",
	        "LogPath": "/var/lib/docker/containers/b491a219f5f6a6da2c04a012513c1a2266c783068f6131eddf3365f4209ece96/b491a219f5f6a6da2c04a012513c1a2266c783068f6131eddf3365f4209ece96-json.log",
	        "Name": "/ha-561110",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-561110:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-561110",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b491a219f5f6a6da2c04a012513c1a2266c783068f6131eddf3365f4209ece96",
	                "LowerDir": "/var/lib/docker/overlay2/5b04665b7cab2ec18af91a710d518904c279e2a90668f078e04a26ace79c7488-init/diff:/var/lib/docker/overlay2/7e8788c6de692bc1c3758a2bb2c4b8da0fbba26855f855c0f3b655bfbdd92f8e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5b04665b7cab2ec18af91a710d518904c279e2a90668f078e04a26ace79c7488/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5b04665b7cab2ec18af91a710d518904c279e2a90668f078e04a26ace79c7488/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5b04665b7cab2ec18af91a710d518904c279e2a90668f078e04a26ace79c7488/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-561110",
	                "Source": "/var/lib/docker/volumes/ha-561110/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-561110",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-561110",
	                "name.minikube.sigs.k8s.io": "ha-561110",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "63b3a8bfef41783609e300f295bd9c6ce0b188ddea8ed2fd34f5208c58b47581",
	            "SandboxKey": "/var/run/docker/netns/63b3a8bfef41",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33535"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33536"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33539"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33537"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33538"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-561110": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "96:82:2a:2d:1a:a2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b16c782e3da877b947afab8daed1813e31e3d205de3fc5d50df3784dc479d217",
	                    "EndpointID": "61c267346b225270082d2c669fb1fa8e14bbb2c2c81a704ce5a2c8a50f3d07f7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-561110",
	                        "b491a219f5f6"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-561110 -n ha-561110
helpers_test.go:252: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-561110 logs -n 25: (1.39345046s)
helpers_test.go:260: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-561110 ssh -n ha-561110-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ ssh     │ ha-561110 ssh -n ha-561110-m02 sudo cat /home/docker/cp-test_ha-561110-m03_ha-561110-m02.txt                                         │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ cp      │ ha-561110 cp ha-561110-m03:/home/docker/cp-test.txt ha-561110-m04:/home/docker/cp-test_ha-561110-m03_ha-561110-m04.txt               │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ ssh     │ ha-561110 ssh -n ha-561110-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ ssh     │ ha-561110 ssh -n ha-561110-m04 sudo cat /home/docker/cp-test_ha-561110-m03_ha-561110-m04.txt                                         │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ cp      │ ha-561110 cp testdata/cp-test.txt ha-561110-m04:/home/docker/cp-test.txt                                                             │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ ssh     │ ha-561110 ssh -n ha-561110-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ cp      │ ha-561110 cp ha-561110-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2616405813/001/cp-test_ha-561110-m04.txt │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ ssh     │ ha-561110 ssh -n ha-561110-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ cp      │ ha-561110 cp ha-561110-m04:/home/docker/cp-test.txt ha-561110:/home/docker/cp-test_ha-561110-m04_ha-561110.txt                       │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ ssh     │ ha-561110 ssh -n ha-561110-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ ssh     │ ha-561110 ssh -n ha-561110 sudo cat /home/docker/cp-test_ha-561110-m04_ha-561110.txt                                                 │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ cp      │ ha-561110 cp ha-561110-m04:/home/docker/cp-test.txt ha-561110-m02:/home/docker/cp-test_ha-561110-m04_ha-561110-m02.txt               │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ ssh     │ ha-561110 ssh -n ha-561110-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ ssh     │ ha-561110 ssh -n ha-561110-m02 sudo cat /home/docker/cp-test_ha-561110-m04_ha-561110-m02.txt                                         │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ cp      │ ha-561110 cp ha-561110-m04:/home/docker/cp-test.txt ha-561110-m03:/home/docker/cp-test_ha-561110-m04_ha-561110-m03.txt               │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ ssh     │ ha-561110 ssh -n ha-561110-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ ssh     │ ha-561110 ssh -n ha-561110-m03 sudo cat /home/docker/cp-test_ha-561110-m04_ha-561110-m03.txt                                         │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ node    │ ha-561110 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ node    │ ha-561110 node start m02 --alsologtostderr -v 5                                                                                      │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:14 UTC │
	│ node    │ ha-561110 node list --alsologtostderr -v 5                                                                                           │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:14 UTC │                     │
	│ stop    │ ha-561110 stop --alsologtostderr -v 5                                                                                                │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:14 UTC │ 22 Nov 25 00:14 UTC │
	│ start   │ ha-561110 start --wait true --alsologtostderr -v 5                                                                                   │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:14 UTC │                     │
	│ node    │ ha-561110 node list --alsologtostderr -v 5                                                                                           │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:23 UTC │                     │
	│ node    │ ha-561110 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:23 UTC │ 22 Nov 25 00:23 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 00:14:41
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1122 00:14:41.051374  563925 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:14:41.051556  563925 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:14:41.051586  563925 out.go:374] Setting ErrFile to fd 2...
	I1122 00:14:41.051607  563925 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:14:41.051880  563925 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1122 00:14:41.052266  563925 out.go:368] Setting JSON to false
	I1122 00:14:41.053166  563925 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":17797,"bootTime":1763752684,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1122 00:14:41.053270  563925 start.go:143] virtualization:  
	I1122 00:14:41.056667  563925 out.go:179] * [ha-561110] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1122 00:14:41.060532  563925 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:14:41.060603  563925 notify.go:221] Checking for updates...
	I1122 00:14:41.067352  563925 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:14:41.070297  563925 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1122 00:14:41.073934  563925 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-513600/.minikube
	I1122 00:14:41.076934  563925 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1122 00:14:41.079898  563925 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:14:41.083494  563925 config.go:182] Loaded profile config "ha-561110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:14:41.083606  563925 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:14:41.111284  563925 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1122 00:14:41.111387  563925 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:14:41.175037  563925 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-22 00:14:41.165296296 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:14:41.175148  563925 docker.go:319] overlay module found
	I1122 00:14:41.178250  563925 out.go:179] * Using the docker driver based on existing profile
	I1122 00:14:41.180953  563925 start.go:309] selected driver: docker
	I1122 00:14:41.180971  563925 start.go:930] validating driver "docker" against &{Name:ha-561110 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-561110 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:14:41.181129  563925 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:14:41.181235  563925 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:14:41.238102  563925 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-22 00:14:41.228646014 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:14:41.238520  563925 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:14:41.238556  563925 cni.go:84] Creating CNI manager for ""
	I1122 00:14:41.238614  563925 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1122 00:14:41.238661  563925 start.go:353] cluster config:
	{Name:ha-561110 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-561110 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:14:41.241877  563925 out.go:179] * Starting "ha-561110" primary control-plane node in "ha-561110" cluster
	I1122 00:14:41.244623  563925 cache.go:134] Beginning downloading kic base image for docker with crio
	I1122 00:14:41.247356  563925 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:14:41.250191  563925 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:14:41.250238  563925 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1122 00:14:41.250251  563925 cache.go:65] Caching tarball of preloaded images
	I1122 00:14:41.250256  563925 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:14:41.250328  563925 preload.go:238] Found /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1122 00:14:41.250339  563925 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1122 00:14:41.250480  563925 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/config.json ...
	I1122 00:14:41.275134  563925 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:14:41.275155  563925 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:14:41.275171  563925 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:14:41.275193  563925 start.go:360] acquireMachinesLock for ha-561110: {Name:mkb487371897d491a1a254bbfa266b10650bf7bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:14:41.275256  563925 start.go:364] duration metric: took 36.265µs to acquireMachinesLock for "ha-561110"
	I1122 00:14:41.275288  563925 start.go:96] Skipping create...Using existing machine configuration
	I1122 00:14:41.275297  563925 fix.go:54] fixHost starting: 
	I1122 00:14:41.275560  563925 cli_runner.go:164] Run: docker container inspect ha-561110 --format={{.State.Status}}
	I1122 00:14:41.292644  563925 fix.go:112] recreateIfNeeded on ha-561110: state=Stopped err=<nil>
	W1122 00:14:41.292679  563925 fix.go:138] unexpected machine state, will restart: <nil>
	I1122 00:14:41.295991  563925 out.go:252] * Restarting existing docker container for "ha-561110" ...
	I1122 00:14:41.296094  563925 cli_runner.go:164] Run: docker start ha-561110
	I1122 00:14:41.567342  563925 cli_runner.go:164] Run: docker container inspect ha-561110 --format={{.State.Status}}
	I1122 00:14:41.593759  563925 kic.go:430] container "ha-561110" state is running.
	I1122 00:14:41.594265  563925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-561110
	I1122 00:14:41.625087  563925 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/config.json ...
	I1122 00:14:41.625337  563925 machine.go:94] provisionDockerMachine start ...
	I1122 00:14:41.625405  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110
	I1122 00:14:41.644350  563925 main.go:143] libmachine: Using SSH client type: native
	I1122 00:14:41.644684  563925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33535 <nil> <nil>}
	I1122 00:14:41.644692  563925 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:14:41.645633  563925 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1122 00:14:44.789929  563925 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-561110
	
	I1122 00:14:44.789988  563925 ubuntu.go:182] provisioning hostname "ha-561110"
	I1122 00:14:44.790089  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110
	I1122 00:14:44.809008  563925 main.go:143] libmachine: Using SSH client type: native
	I1122 00:14:44.809338  563925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33535 <nil> <nil>}
	I1122 00:14:44.809354  563925 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-561110 && echo "ha-561110" | sudo tee /etc/hostname
	I1122 00:14:44.959054  563925 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-561110
	
	I1122 00:14:44.959174  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110
	I1122 00:14:44.977402  563925 main.go:143] libmachine: Using SSH client type: native
	I1122 00:14:44.977725  563925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33535 <nil> <nil>}
	I1122 00:14:44.977747  563925 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-561110' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-561110/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-561110' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:14:45.148701  563925 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1122 00:14:45.148780  563925 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-513600/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-513600/.minikube}
	I1122 00:14:45.148894  563925 ubuntu.go:190] setting up certificates
	I1122 00:14:45.148911  563925 provision.go:84] configureAuth start
	I1122 00:14:45.149003  563925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-561110
	I1122 00:14:45.178821  563925 provision.go:143] copyHostCerts
	I1122 00:14:45.178872  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem
	I1122 00:14:45.178980  563925 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem, removing ...
	I1122 00:14:45.179051  563925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem
	I1122 00:14:45.179147  563925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem (1078 bytes)
	I1122 00:14:45.179368  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem
	I1122 00:14:45.179396  563925 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem, removing ...
	I1122 00:14:45.179408  563925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem
	I1122 00:14:45.179513  563925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem (1123 bytes)
	I1122 00:14:45.179582  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem
	I1122 00:14:45.179688  563925 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem, removing ...
	I1122 00:14:45.179693  563925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem
	I1122 00:14:45.179763  563925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem (1675 bytes)
	I1122 00:14:45.179869  563925 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem org=jenkins.ha-561110 san=[127.0.0.1 192.168.49.2 ha-561110 localhost minikube]
	I1122 00:14:45.360921  563925 provision.go:177] copyRemoteCerts
	I1122 00:14:45.360991  563925 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:14:45.361031  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110
	I1122 00:14:45.379675  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33535 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110/id_rsa Username:docker}
	I1122 00:14:45.481986  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1122 00:14:45.482096  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:14:45.500661  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1122 00:14:45.500750  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1122 00:14:45.519280  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1122 00:14:45.519388  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1122 00:14:45.538099  563925 provision.go:87] duration metric: took 389.17288ms to configureAuth
	I1122 00:14:45.538126  563925 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:14:45.538361  563925 config.go:182] Loaded profile config "ha-561110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:14:45.538464  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110
	I1122 00:14:45.557843  563925 main.go:143] libmachine: Using SSH client type: native
	I1122 00:14:45.558153  563925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33535 <nil> <nil>}
	I1122 00:14:45.558173  563925 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1122 00:14:45.916699  563925 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1122 00:14:45.916722  563925 machine.go:97] duration metric: took 4.291375262s to provisionDockerMachine
	I1122 00:14:45.916734  563925 start.go:293] postStartSetup for "ha-561110" (driver="docker")
	I1122 00:14:45.916744  563925 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:14:45.916808  563925 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:14:45.916864  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110
	I1122 00:14:45.937454  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33535 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110/id_rsa Username:docker}
	I1122 00:14:46.038557  563925 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:14:46.042104  563925 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:14:46.042148  563925 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:14:46.042162  563925 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/addons for local assets ...
	I1122 00:14:46.042244  563925 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/files for local assets ...
	I1122 00:14:46.042340  563925 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem -> 5169372.pem in /etc/ssl/certs
	I1122 00:14:46.042358  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem -> /etc/ssl/certs/5169372.pem
	I1122 00:14:46.042519  563925 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:14:46.050335  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /etc/ssl/certs/5169372.pem (1708 bytes)
	I1122 00:14:46.070075  563925 start.go:296] duration metric: took 153.324249ms for postStartSetup
	I1122 00:14:46.070158  563925 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:14:46.070200  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110
	I1122 00:14:46.089314  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33535 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110/id_rsa Username:docker}
	I1122 00:14:46.187250  563925 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:14:46.192065  563925 fix.go:56] duration metric: took 4.916761973s for fixHost
	I1122 00:14:46.192091  563925 start.go:83] releasing machines lock for "ha-561110", held for 4.916821031s
	I1122 00:14:46.192188  563925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-561110
	I1122 00:14:46.209139  563925 ssh_runner.go:195] Run: cat /version.json
	I1122 00:14:46.209197  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110
	I1122 00:14:46.209461  563925 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:14:46.209511  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110
	I1122 00:14:46.233161  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33535 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110/id_rsa Username:docker}
	I1122 00:14:46.237608  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33535 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110/id_rsa Username:docker}
	I1122 00:14:46.417414  563925 ssh_runner.go:195] Run: systemctl --version
	I1122 00:14:46.423708  563925 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1122 00:14:46.459853  563925 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:14:46.464430  563925 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:14:46.464499  563925 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:14:46.472070  563925 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1122 00:14:46.472092  563925 start.go:496] detecting cgroup driver to use...
	I1122 00:14:46.472140  563925 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1122 00:14:46.472192  563925 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1122 00:14:46.487805  563925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1122 00:14:46.501008  563925 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:14:46.501113  563925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:14:46.517083  563925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:14:46.530035  563925 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:14:46.634532  563925 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:14:46.753160  563925 docker.go:234] disabling docker service ...
	I1122 00:14:46.753271  563925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:14:46.768112  563925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:14:46.781109  563925 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:14:46.889282  563925 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:14:47.012744  563925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:14:47.026639  563925 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:14:47.040275  563925 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1122 00:14:47.040386  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:14:47.049142  563925 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1122 00:14:47.049222  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:14:47.057948  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:14:47.066761  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:14:47.076164  563925 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:14:47.085123  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:14:47.094801  563925 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:14:47.102952  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:14:47.111641  563925 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:14:47.119239  563925 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:14:47.126541  563925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:14:47.233256  563925 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1122 00:14:47.384501  563925 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1122 00:14:47.384567  563925 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1122 00:14:47.388356  563925 start.go:564] Will wait 60s for crictl version
	I1122 00:14:47.388468  563925 ssh_runner.go:195] Run: which crictl
	I1122 00:14:47.392030  563925 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:14:47.416283  563925 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1122 00:14:47.416422  563925 ssh_runner.go:195] Run: crio --version
	I1122 00:14:47.444890  563925 ssh_runner.go:195] Run: crio --version
	I1122 00:14:47.480934  563925 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1122 00:14:47.483635  563925 cli_runner.go:164] Run: docker network inspect ha-561110 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:14:47.499516  563925 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1122 00:14:47.503369  563925 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:14:47.513239  563925 kubeadm.go:884] updating cluster {Name:ha-561110 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-561110 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1122 00:14:47.513386  563925 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:14:47.513453  563925 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:14:47.547714  563925 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:14:47.547741  563925 crio.go:433] Images already preloaded, skipping extraction
	I1122 00:14:47.547794  563925 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:14:47.572446  563925 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:14:47.572474  563925 cache_images.go:86] Images are preloaded, skipping loading
	I1122 00:14:47.572483  563925 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1122 00:14:47.572577  563925 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-561110 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-561110 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
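
The `[Unit]/[Service]/[Install]` fragment above is the kubelet systemd drop-in that is later copied to `/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` (the 359-byte scp further down). A sketch of rendering such a drop-in with `text/template`, with an abbreviated flag list; the field names are assumptions, not minikube's actual template data:

```go
package main

import (
	"os"
	"text/template"
)

// Abbreviated version of the drop-in shown in the log above.
const kubeletDropIn = `[Unit]
Wants={{.Runtime}}.service

[Service]
ExecStart=
ExecStart={{.BinDir}}/kubelet --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletDropIn))
	_ = t.Execute(os.Stdout, map[string]string{
		"Runtime":  "crio",
		"BinDir":   "/var/lib/minikube/binaries/v1.34.1",
		"NodeName": "ha-561110",
		"NodeIP":   "192.168.49.2",
	})
}
```
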
	I1122 00:14:47.572661  563925 ssh_runner.go:195] Run: crio config
	I1122 00:14:47.634066  563925 cni.go:84] Creating CNI manager for ""
	I1122 00:14:47.634094  563925 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1122 00:14:47.634114  563925 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1122 00:14:47.634156  563925 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-561110 NodeName:ha-561110 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 00:14:47.634316  563925 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-561110"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
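
Note that the generated `kubeadm.k8s.io/v1beta4` config above renders every extra argument as a `- name:` / `value:` pair rather than a plain map, matching v1beta4's list-of-args schema. A sketch of converting a flat options map (like the ExtraArgs in the kubeadm options dump) into that shape; the types here are illustrative only:

```go
package main

import (
	"fmt"
	"sort"
)

// Arg mirrors the name/value entries visible in the rendered config above.
type Arg struct {
	Name  string `yaml:"name"`
	Value string `yaml:"value"`
}

func toExtraArgs(opts map[string]string) []Arg {
	keys := make([]string, 0, len(opts))
	for k := range opts {
		keys = append(keys, k)
	}
	sort.Strings(keys) // deterministic order keeps the generated YAML stable
	args := make([]Arg, 0, len(keys))
	for _, k := range keys {
		args = append(args, Arg{Name: k, Value: opts[k]})
	}
	return args
}

func main() {
	fmt.Println(toExtraArgs(map[string]string{
		"leader-elect":        "false",
		"allocate-node-cidrs": "true",
	}))
}
```
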
	
	I1122 00:14:47.634340  563925 kube-vip.go:115] generating kube-vip config ...
	I1122 00:14:47.634397  563925 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1122 00:14:47.646470  563925 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1122 00:14:47.646593  563925 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
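
The kube-vip manifest above is written in its ARP-only form because the earlier `lsmod | grep ip_vs` probe exited with status 1, which is why the log "gives up" on IPVS-based control-plane load balancing. A sketch of that probe-and-fall-back decision, assuming a simple local exec wrapper (not minikube's kube-vip package):

```go
package main

import (
	"fmt"
	"os/exec"
)

func ipvsAvailable() bool {
	// `lsmod | grep ip_vs` exits non-zero when no ip_vs module is loaded,
	// which is the status-1 result the log reports before giving up.
	return exec.Command("sh", "-c", "lsmod | grep -q ip_vs").Run() == nil
}

func main() {
	if ipvsAvailable() {
		fmt.Println("keep control-plane load-balancing enabled in the kube-vip manifest")
	} else {
		fmt.Println("write the ARP-only kube-vip manifest (as in the log above)")
	}
}
```
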
	I1122 00:14:47.646695  563925 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 00:14:47.654183  563925 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:14:47.654249  563925 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1122 00:14:47.661699  563925 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1122 00:14:47.674165  563925 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:14:47.686331  563925 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1122 00:14:47.698542  563925 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
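
The `scp memory --> ...` lines above push assets that exist only in memory (the rendered kubelet drop-in, kubeadm.yaml.new, and kube-vip manifest) straight to paths on the node. A sketch of the underlying idea with the transport stubbed out by a buffer; nothing here is the real ssh_runner API:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
)

// pushMemoryAsset writes an in-memory asset to whatever transport the caller
// provides (an SSH session's stdin in the real flow, a plain buffer here).
func pushMemoryAsset(dst io.Writer, contents []byte) (int64, error) {
	return io.Copy(dst, bytes.NewReader(contents))
}

func main() {
	var remote bytes.Buffer // stands in for the remote file
	n, err := pushMemoryAsset(&remote, []byte("apiVersion: kubeadm.k8s.io/v1beta4\n"))
	fmt.Println(n, err)
}
```
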
	I1122 00:14:47.711254  563925 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1122 00:14:47.714862  563925 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:14:47.724174  563925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:14:47.839371  563925 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:14:47.853685  563925 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110 for IP: 192.168.49.2
	I1122 00:14:47.853753  563925 certs.go:195] generating shared ca certs ...
	I1122 00:14:47.853787  563925 certs.go:227] acquiring lock for ca certs: {Name:mkaf4c79493334cb2058b349c36be7473837f9b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:14:47.853987  563925 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key
	I1122 00:14:47.854075  563925 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key
	I1122 00:14:47.854111  563925 certs.go:257] generating profile certs ...
	I1122 00:14:47.854232  563925 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/client.key
	I1122 00:14:47.854280  563925 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.key.17887f76
	I1122 00:14:47.854319  563925 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.crt.17887f76 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1122 00:14:47.941434  563925 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.crt.17887f76 ...
	I1122 00:14:47.941949  563925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.crt.17887f76: {Name:mk196d114e0b17147f8bed35c49f594a2533cc5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:14:47.942154  563925 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.key.17887f76 ...
	I1122 00:14:47.942191  563925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.key.17887f76: {Name:mk34aa50af1cad4bd0a7687c2b98f2a65013e746 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:14:47.942314  563925 certs.go:382] copying /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.crt.17887f76 -> /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.crt
	I1122 00:14:47.942500  563925 certs.go:386] copying /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.key.17887f76 -> /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.key
	I1122 00:14:47.942693  563925 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.key
	I1122 00:14:47.942729  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1122 00:14:47.942772  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1122 00:14:47.942814  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1122 00:14:47.942845  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1122 00:14:47.942881  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1122 00:14:47.942927  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1122 00:14:47.942960  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1122 00:14:47.942996  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1122 00:14:47.943078  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem (1338 bytes)
	W1122 00:14:47.943133  563925 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937_empty.pem, impossibly tiny 0 bytes
	I1122 00:14:47.943156  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:14:47.943215  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:14:47.943265  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:14:47.943352  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem (1675 bytes)
	I1122 00:14:47.943431  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem (1708 bytes)
	I1122 00:14:47.943512  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:14:47.943556  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem -> /usr/share/ca-certificates/516937.pem
	I1122 00:14:47.943584  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem -> /usr/share/ca-certificates/5169372.pem
	I1122 00:14:47.944164  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:14:47.970032  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1122 00:14:47.993299  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:14:48.024732  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:14:48.049916  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1122 00:14:48.074841  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1122 00:14:48.093300  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:14:48.113386  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1122 00:14:48.133760  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:14:48.153049  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem --> /usr/share/ca-certificates/516937.pem (1338 bytes)
	I1122 00:14:48.173569  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /usr/share/ca-certificates/5169372.pem (1708 bytes)
	I1122 00:14:48.198292  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 00:14:48.211957  563925 ssh_runner.go:195] Run: openssl version
	I1122 00:14:48.218515  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516937.pem && ln -fs /usr/share/ca-certificates/516937.pem /etc/ssl/certs/516937.pem"
	I1122 00:14:48.228447  563925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516937.pem
	I1122 00:14:48.232426  563925 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:55 /usr/share/ca-certificates/516937.pem
	I1122 00:14:48.232551  563925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516937.pem
	I1122 00:14:48.273469  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516937.pem /etc/ssl/certs/51391683.0"
	I1122 00:14:48.281348  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5169372.pem && ln -fs /usr/share/ca-certificates/5169372.pem /etc/ssl/certs/5169372.pem"
	I1122 00:14:48.289635  563925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5169372.pem
	I1122 00:14:48.293430  563925 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:55 /usr/share/ca-certificates/5169372.pem
	I1122 00:14:48.293550  563925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5169372.pem
	I1122 00:14:48.335324  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5169372.pem /etc/ssl/certs/3ec20f2e.0"
	I1122 00:14:48.343382  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:14:48.351346  563925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:14:48.354892  563925 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:14:48.354958  563925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:14:48.398958  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
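
Each CA copied under `/usr/share/ca-certificates` is also linked from `/etc/ssl/certs` under its OpenSSL subject hash plus a `.0` suffix (`51391683.0`, `3ec20f2e.0`, `b5213941.0` above), which is the layout OpenSSL uses to locate trusted CAs at verify time. A sketch of computing that link name from `openssl x509 -hash -noout`; the helper itself is hypothetical:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hashLinkName returns "<subject-hash>.0", the file name the symlink in
// /etc/ssl/certs should use for the given certificate.
func hashLinkName(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)) + ".0", nil
}

func main() {
	name, err := hashLinkName("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Println("hash failed:", err)
		return
	}
	fmt.Println("link as /etc/ssl/certs/" + name)
}
```
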
	I1122 00:14:48.406910  563925 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:14:48.410614  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1122 00:14:48.451560  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1122 00:14:48.492804  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1122 00:14:48.540013  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1122 00:14:48.585271  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1122 00:14:48.653970  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
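
The `-checkend 86400` calls above ask openssl whether each certificate will still be valid 24 hours from now; exit status 0 means it will, non-zero means it is about to expire and would need regeneration. A small sketch of the same check, with paths taken from the log and a hypothetical wrapper:

```go
package main

import (
	"fmt"
	"os/exec"
)

// expiresWithinADay reports whether openssl says the certificate will expire
// within the next 86400 seconds (openssl exits non-zero in that case).
func expiresWithinADay(certPath string) bool {
	err := exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400").Run()
	return err != nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	} {
		fmt.Printf("%s expires within 24h: %v\n", p, expiresWithinADay(p))
	}
}
```
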
	I1122 00:14:48.747548  563925 kubeadm.go:401] StartCluster: {Name:ha-561110 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-561110 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:14:48.747694  563925 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1122 00:14:48.747775  563925 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:14:48.836090  563925 cri.go:89] found id: "4cbb3fde391bd86e756416ec260b0b8a5501d5139da802107965d9e012c4eca5"
	I1122 00:14:48.836127  563925 cri.go:89] found id: "4360f5517fd5eb7d570a98dee1b801419d3b650d7e890d5ddecc79946fba46db"
	I1122 00:14:48.836132  563925 cri.go:89] found id: "a395e7473ffe2b7999ae75a70e19b4f153d459c8ccae48aeeb71b5b6248cc1f2"
	I1122 00:14:48.836136  563925 cri.go:89] found id: "9fdf72902e6e01af8761552bc83ad83cdf5a34600401d1ee9126ac6a25ae0e37"
	I1122 00:14:48.836140  563925 cri.go:89] found id: "1c929db60119ab54f03020d00f2063dc6672d329ea34f4504e502142bffbe644"
	I1122 00:14:48.836148  563925 cri.go:89] found id: ""
	I1122 00:14:48.836216  563925 ssh_runner.go:195] Run: sudo runc list -f json
	W1122 00:14:48.857525  563925 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:14:48Z" level=error msg="open /run/runc: no such file or directory"
	I1122 00:14:48.857613  563925 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 00:14:48.878520  563925 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1122 00:14:48.878565  563925 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1122 00:14:48.878624  563925 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1122 00:14:48.898381  563925 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1122 00:14:48.898972  563925 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-561110" does not appear in /home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1122 00:14:48.899101  563925 kubeconfig.go:62] /home/jenkins/minikube-integration/21934-513600/kubeconfig needs updating (will repair): [kubeconfig missing "ha-561110" cluster setting kubeconfig missing "ha-561110" context setting]
	I1122 00:14:48.900028  563925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/kubeconfig: {Name:mkcd094d8a765e5a81369c0fb33cb5e696b17bb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:14:48.901567  563925 kapi.go:59] client config for ha-561110: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/client.crt", KeyFile:"/home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/client.key", CAFile:"/home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1122 00:14:48.907943  563925 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1122 00:14:48.907972  563925 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1122 00:14:48.907979  563925 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1122 00:14:48.907984  563925 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1122 00:14:48.907993  563925 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1122 00:14:48.908413  563925 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1122 00:14:48.908668  563925 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1122 00:14:48.938459  563925 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1122 00:14:48.938496  563925 kubeadm.go:602] duration metric: took 59.924061ms to restartPrimaryControlPlane
	I1122 00:14:48.938507  563925 kubeadm.go:403] duration metric: took 190.97977ms to StartCluster
	I1122 00:14:48.938533  563925 settings.go:142] acquiring lock: {Name:mk6c31eb57ec65b047b78b4e1046e03fe7cc77bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:14:48.938632  563925 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1122 00:14:48.939442  563925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/kubeconfig: {Name:mkcd094d8a765e5a81369c0fb33cb5e696b17bb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:14:48.939701  563925 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:14:48.939739  563925 start.go:242] waiting for startup goroutines ...
	I1122 00:14:48.939758  563925 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1122 00:14:48.940342  563925 config.go:182] Loaded profile config "ha-561110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:14:48.944134  563925 out.go:179] * Enabled addons: 
	I1122 00:14:48.947186  563925 addons.go:530] duration metric: took 7.425265ms for enable addons: enabled=[]
	I1122 00:14:48.947258  563925 start.go:247] waiting for cluster config update ...
	I1122 00:14:48.947278  563925 start.go:256] writing updated cluster config ...
	I1122 00:14:48.950835  563925 out.go:203] 
	I1122 00:14:48.954183  563925 config.go:182] Loaded profile config "ha-561110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:14:48.954390  563925 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/config.json ...
	I1122 00:14:48.958001  563925 out.go:179] * Starting "ha-561110-m02" control-plane node in "ha-561110" cluster
	I1122 00:14:48.961037  563925 cache.go:134] Beginning downloading kic base image for docker with crio
	I1122 00:14:48.964123  563925 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:14:48.966981  563925 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:14:48.967024  563925 cache.go:65] Caching tarball of preloaded images
	I1122 00:14:48.967169  563925 preload.go:238] Found /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1122 00:14:48.967185  563925 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1122 00:14:48.967352  563925 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/config.json ...
	I1122 00:14:48.967608  563925 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:14:49.000604  563925 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:14:49.000625  563925 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:14:49.000646  563925 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:14:49.000671  563925 start.go:360] acquireMachinesLock for ha-561110-m02: {Name:mkb358f78002efa4c17b8c7cead5ae57992aea2d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:14:49.000737  563925 start.go:364] duration metric: took 50.534µs to acquireMachinesLock for "ha-561110-m02"
	I1122 00:14:49.000757  563925 start.go:96] Skipping create...Using existing machine configuration
	I1122 00:14:49.000763  563925 fix.go:54] fixHost starting: m02
	I1122 00:14:49.001076  563925 cli_runner.go:164] Run: docker container inspect ha-561110-m02 --format={{.State.Status}}
	I1122 00:14:49.034056  563925 fix.go:112] recreateIfNeeded on ha-561110-m02: state=Stopped err=<nil>
	W1122 00:14:49.034088  563925 fix.go:138] unexpected machine state, will restart: <nil>
	I1122 00:14:49.037399  563925 out.go:252] * Restarting existing docker container for "ha-561110-m02" ...
	I1122 00:14:49.037518  563925 cli_runner.go:164] Run: docker start ha-561110-m02
	I1122 00:14:49.451675  563925 cli_runner.go:164] Run: docker container inspect ha-561110-m02 --format={{.State.Status}}
	I1122 00:14:49.475681  563925 kic.go:430] container "ha-561110-m02" state is running.
	I1122 00:14:49.476112  563925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-561110-m02
	I1122 00:14:49.506374  563925 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/config.json ...
	I1122 00:14:49.506719  563925 machine.go:94] provisionDockerMachine start ...
	I1122 00:14:49.506835  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m02
	I1122 00:14:49.550202  563925 main.go:143] libmachine: Using SSH client type: native
	I1122 00:14:49.550557  563925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33540 <nil> <nil>}
	I1122 00:14:49.550573  563925 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:14:49.551331  563925 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37062->127.0.0.1:33540: read: connection reset by peer
	I1122 00:14:52.908642  563925 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-561110-m02
	
	I1122 00:14:52.908715  563925 ubuntu.go:182] provisioning hostname "ha-561110-m02"
	I1122 00:14:52.908805  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m02
	I1122 00:14:52.953932  563925 main.go:143] libmachine: Using SSH client type: native
	I1122 00:14:52.954246  563925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33540 <nil> <nil>}
	I1122 00:14:52.954258  563925 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-561110-m02 && echo "ha-561110-m02" | sudo tee /etc/hostname
	I1122 00:14:53.345252  563925 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-561110-m02
	
	I1122 00:14:53.345401  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m02
	I1122 00:14:53.377691  563925 main.go:143] libmachine: Using SSH client type: native
	I1122 00:14:53.378150  563925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33540 <nil> <nil>}
	I1122 00:14:53.378172  563925 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-561110-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-561110-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-561110-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:14:53.591463  563925 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1122 00:14:53.591496  563925 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-513600/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-513600/.minikube}
	I1122 00:14:53.591513  563925 ubuntu.go:190] setting up certificates
	I1122 00:14:53.591526  563925 provision.go:84] configureAuth start
	I1122 00:14:53.591597  563925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-561110-m02
	I1122 00:14:53.618168  563925 provision.go:143] copyHostCerts
	I1122 00:14:53.618211  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem
	I1122 00:14:53.618242  563925 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem, removing ...
	I1122 00:14:53.618253  563925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem
	I1122 00:14:53.618333  563925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem (1078 bytes)
	I1122 00:14:53.618435  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem
	I1122 00:14:53.618458  563925 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem, removing ...
	I1122 00:14:53.618465  563925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem
	I1122 00:14:53.618494  563925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem (1123 bytes)
	I1122 00:14:53.618552  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem
	I1122 00:14:53.618576  563925 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem, removing ...
	I1122 00:14:53.618584  563925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem
	I1122 00:14:53.618612  563925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem (1675 bytes)
	I1122 00:14:53.618665  563925 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem org=jenkins.ha-561110-m02 san=[127.0.0.1 192.168.49.3 ha-561110-m02 localhost minikube]
	I1122 00:14:53.787782  563925 provision.go:177] copyRemoteCerts
	I1122 00:14:53.787855  563925 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:14:53.787902  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m02
	I1122 00:14:53.805764  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110-m02/id_rsa Username:docker}
	I1122 00:14:53.914816  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1122 00:14:53.914879  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:14:53.944075  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1122 00:14:53.944134  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1122 00:14:53.978384  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1122 00:14:53.978443  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1122 00:14:54.007139  563925 provision.go:87] duration metric: took 415.59481ms to configureAuth
	I1122 00:14:54.007174  563925 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:14:54.007455  563925 config.go:182] Loaded profile config "ha-561110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:14:54.007583  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m02
	I1122 00:14:54.047939  563925 main.go:143] libmachine: Using SSH client type: native
	I1122 00:14:54.048267  563925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33540 <nil> <nil>}
	I1122 00:14:54.048291  563925 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1122 00:14:54.482099  563925 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1122 00:14:54.482120  563925 machine.go:97] duration metric: took 4.975378731s to provisionDockerMachine
	I1122 00:14:54.482133  563925 start.go:293] postStartSetup for "ha-561110-m02" (driver="docker")
	I1122 00:14:54.482144  563925 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:14:54.482209  563925 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:14:54.482252  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m02
	I1122 00:14:54.500164  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110-m02/id_rsa Username:docker}
	I1122 00:14:54.602698  563925 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:14:54.606253  563925 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:14:54.606285  563925 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:14:54.606296  563925 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/addons for local assets ...
	I1122 00:14:54.606352  563925 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/files for local assets ...
	I1122 00:14:54.606439  563925 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem -> 5169372.pem in /etc/ssl/certs
	I1122 00:14:54.606450  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem -> /etc/ssl/certs/5169372.pem
	I1122 00:14:54.606572  563925 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:14:54.614732  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /etc/ssl/certs/5169372.pem (1708 bytes)
	I1122 00:14:54.633198  563925 start.go:296] duration metric: took 151.050123ms for postStartSetup
	I1122 00:14:54.633327  563925 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:14:54.633378  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m02
	I1122 00:14:54.651888  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110-m02/id_rsa Username:docker}
	I1122 00:14:54.751498  563925 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:14:54.757858  563925 fix.go:56] duration metric: took 5.757088169s for fixHost
	I1122 00:14:54.757886  563925 start.go:83] releasing machines lock for "ha-561110-m02", held for 5.757140204s
	I1122 00:14:54.757958  563925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-561110-m02
	I1122 00:14:54.778371  563925 out.go:179] * Found network options:
	I1122 00:14:54.781341  563925 out.go:179]   - NO_PROXY=192.168.49.2
	W1122 00:14:54.784285  563925 proxy.go:120] fail to check proxy env: Error ip not in block
	W1122 00:14:54.784332  563925 proxy.go:120] fail to check proxy env: Error ip not in block
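
The two "fail to check proxy env: Error ip not in block" warnings appear to mean that the node address could not be matched against `NO_PROXY=192.168.49.2` as a CIDR block (a bare IP is not a block). A rough sketch of such a containment test using only the standard library; this is an interpretation of the log, not minikube's proxy package:

```go
package main

import (
	"fmt"
	"net"
)

// ipInBlock reports whether ip falls inside the CIDR block, or returns an
// error when the NO_PROXY entry is not a parseable block at all.
func ipInBlock(ip, cidr string) (bool, error) {
	_, block, err := net.ParseCIDR(cidr)
	if err != nil {
		return false, fmt.Errorf("not a CIDR block: %w", err)
	}
	return block.Contains(net.ParseIP(ip)), nil
}

func main() {
	ok, err := ipInBlock("192.168.49.3", "192.168.49.2") // bare IP, not a block
	fmt.Println(ok, err)
	ok, _ = ipInBlock("192.168.49.3", "192.168.49.0/24")
	fmt.Println(ok) // true
}
```
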
	I1122 00:14:54.784409  563925 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1122 00:14:54.784457  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m02
	I1122 00:14:54.784734  563925 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:14:54.784793  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m02
	I1122 00:14:54.806895  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110-m02/id_rsa Username:docker}
	I1122 00:14:54.810601  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110-m02/id_rsa Username:docker}
	I1122 00:14:54.952580  563925 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:14:55.010644  563925 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:14:55.010736  563925 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:14:55.020151  563925 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1122 00:14:55.020182  563925 start.go:496] detecting cgroup driver to use...
	I1122 00:14:55.020226  563925 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1122 00:14:55.020299  563925 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1122 00:14:55.036774  563925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1122 00:14:55.050901  563925 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:14:55.051008  563925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:14:55.067844  563925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:14:55.088601  563925 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:14:55.315735  563925 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:14:55.558850  563925 docker.go:234] disabling docker service ...
	I1122 00:14:55.558960  563925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:14:55.576438  563925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:14:55.595046  563925 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:14:55.815234  563925 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:14:56.006098  563925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:14:56.021481  563925 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:14:56.044364  563925 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1122 00:14:56.044478  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:14:56.068864  563925 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1122 00:14:56.068980  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:14:56.084397  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:14:56.114539  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:14:56.145163  563925 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:14:56.167039  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:14:56.186342  563925 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:14:56.205126  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:14:56.216422  563925 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:14:56.246320  563925 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:14:56.266882  563925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:14:56.589643  563925 ssh_runner.go:195] Run: sudo systemctl restart crio
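
The run of `sed -i` commands above edits `/etc/crio/crio.conf.d/02-crio.conf` in place: pin `pause_image`, force `cgroup_manager = "cgroupfs"`, re-add `conmon_cgroup`, and open unprivileged ports before `systemctl restart crio`. A sketch of building one of those whole-line replacements before handing it to the SSH runner; the helper name is illustrative:

```go
package main

import "fmt"

// sedSetKey replaces the whole `key = ...` line with a quoted value, matching
// the pattern used for pause_image and cgroup_manager in the log above.
func sedSetKey(key, value, file string) string {
	return fmt.Sprintf(`sudo sed -i 's|^.*%s = .*$|%s = "%s"|' %s`, key, key, value, file)
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	fmt.Println(sedSetKey("pause_image", "registry.k8s.io/pause:3.10.1", conf))
	fmt.Println(sedSetKey("cgroup_manager", "cgroupfs", conf))
}
```
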
	I1122 00:14:56.984258  563925 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1122 00:14:56.984384  563925 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1122 00:14:56.988684  563925 start.go:564] Will wait 60s for crictl version
	I1122 00:14:56.988823  563925 ssh_runner.go:195] Run: which crictl
	I1122 00:14:56.993930  563925 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:14:57.036836  563925 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1122 00:14:57.036996  563925 ssh_runner.go:195] Run: crio --version
	I1122 00:14:57.084070  563925 ssh_runner.go:195] Run: crio --version
	I1122 00:14:57.125443  563925 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1122 00:14:57.128539  563925 out.go:179]   - env NO_PROXY=192.168.49.2
	I1122 00:14:57.131626  563925 cli_runner.go:164] Run: docker network inspect ha-561110 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:14:57.158795  563925 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1122 00:14:57.173001  563925 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:14:57.195629  563925 mustload.go:66] Loading cluster: ha-561110
	I1122 00:14:57.195865  563925 config.go:182] Loaded profile config "ha-561110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:14:57.196127  563925 cli_runner.go:164] Run: docker container inspect ha-561110 --format={{.State.Status}}
	I1122 00:14:57.223215  563925 host.go:66] Checking if "ha-561110" exists ...
	I1122 00:14:57.223486  563925 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110 for IP: 192.168.49.3
	I1122 00:14:57.223499  563925 certs.go:195] generating shared ca certs ...
	I1122 00:14:57.223514  563925 certs.go:227] acquiring lock for ca certs: {Name:mkaf4c79493334cb2058b349c36be7473837f9b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:14:57.223627  563925 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key
	I1122 00:14:57.223673  563925 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key
	I1122 00:14:57.223683  563925 certs.go:257] generating profile certs ...
	I1122 00:14:57.223760  563925 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/client.key
	I1122 00:14:57.223818  563925 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.key.1995a48d
	I1122 00:14:57.223886  563925 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.key
	I1122 00:14:57.223904  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1122 00:14:57.223916  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1122 00:14:57.223932  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1122 00:14:57.223943  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1122 00:14:57.223958  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1122 00:14:57.223970  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1122 00:14:57.223985  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1122 00:14:57.223995  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1122 00:14:57.224044  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem (1338 bytes)
	W1122 00:14:57.224081  563925 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937_empty.pem, impossibly tiny 0 bytes
	I1122 00:14:57.224093  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:14:57.224122  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:14:57.224153  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:14:57.224179  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem (1675 bytes)
	I1122 00:14:57.224229  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem (1708 bytes)
	I1122 00:14:57.224300  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem -> /usr/share/ca-certificates/5169372.pem
	I1122 00:14:57.224317  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:14:57.224334  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem -> /usr/share/ca-certificates/516937.pem
	I1122 00:14:57.224393  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110
	I1122 00:14:57.252760  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33535 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110/id_rsa Username:docker}
	I1122 00:14:57.354098  563925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1122 00:14:57.358457  563925 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1122 00:14:57.367394  563925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1122 00:14:57.371898  563925 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1122 00:14:57.380426  563925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1122 00:14:57.384846  563925 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1122 00:14:57.393409  563925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1122 00:14:57.397317  563925 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1122 00:14:57.405462  563925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1122 00:14:57.409765  563925 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1122 00:14:57.418123  563925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1122 00:14:57.422240  563925 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1122 00:14:57.430625  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:14:57.448740  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1122 00:14:57.466976  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:14:57.489136  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:14:57.510655  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1122 00:14:57.531352  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1122 00:14:57.551538  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:14:57.572743  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1122 00:14:57.593047  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /usr/share/ca-certificates/5169372.pem (1708 bytes)
	I1122 00:14:57.616537  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:14:57.636347  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem --> /usr/share/ca-certificates/516937.pem (1338 bytes)
	I1122 00:14:57.655714  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1122 00:14:57.671132  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1122 00:14:57.686013  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1122 00:14:57.702655  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1122 00:14:57.717580  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1122 00:14:57.733104  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1122 00:14:57.748086  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1122 00:14:57.762829  563925 ssh_runner.go:195] Run: openssl version
	I1122 00:14:57.770255  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5169372.pem && ln -fs /usr/share/ca-certificates/5169372.pem /etc/ssl/certs/5169372.pem"
	I1122 00:14:57.779598  563925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5169372.pem
	I1122 00:14:57.784055  563925 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:55 /usr/share/ca-certificates/5169372.pem
	I1122 00:14:57.784140  563925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5169372.pem
	I1122 00:14:57.827123  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5169372.pem /etc/ssl/certs/3ec20f2e.0"
	I1122 00:14:57.836065  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:14:57.845341  563925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:14:57.849594  563925 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:14:57.849679  563925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:14:57.893282  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1122 00:14:57.903127  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516937.pem && ln -fs /usr/share/ca-certificates/516937.pem /etc/ssl/certs/516937.pem"
	I1122 00:14:57.912201  563925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516937.pem
	I1122 00:14:57.916336  563925 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:55 /usr/share/ca-certificates/516937.pem
	I1122 00:14:57.916418  563925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516937.pem
	I1122 00:14:57.959761  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516937.pem /etc/ssl/certs/51391683.0"
	I1122 00:14:57.969369  563925 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:14:57.974254  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1122 00:14:58.017064  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1122 00:14:58.070486  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1122 00:14:58.116182  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1122 00:14:58.158146  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1122 00:14:58.220397  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
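
The openssl x509 -checkend 86400 runs above confirm that each control-plane certificate stays valid for at least another 24 hours. A minimal Go sketch of the same check (illustrative only; the path is one of those tested above):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d,
	// mirroring `openssl x509 -noout -in <path> -checkend <seconds>`.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM data", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		// 86400 seconds = 24h, matching -checkend 86400 in the log.
		expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", expiring)
	}
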
	I1122 00:14:58.263034  563925 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1122 00:14:58.263156  563925 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-561110-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-561110 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1122 00:14:58.263186  563925 kube-vip.go:115] generating kube-vip config ...
	I1122 00:14:58.263244  563925 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1122 00:14:58.282844  563925 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1122 00:14:58.282918  563925 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
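
The `lsmod | grep ip_vs` probe above exits non-zero, so control-plane load-balancing is skipped and the plain kube-vip static-pod manifest shown above is generated instead. A standalone approximation of that probe (assuming a Linux host, where loaded modules are listed in /proc/modules):

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// ipvsAvailable reports whether any ip_vs* kernel module is loaded, the same
	// signal the log derives from `sudo sh -c "lsmod | grep ip_vs"`.
	func ipvsAvailable() (bool, error) {
		f, err := os.Open("/proc/modules")
		if err != nil {
			return false, err
		}
		defer f.Close()
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			fields := strings.Fields(sc.Text())
			if len(fields) > 0 && strings.HasPrefix(fields[0], "ip_vs") {
				return true, nil
			}
		}
		return false, sc.Err()
	}

	func main() {
		ok, err := ipvsAvailable()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if ok {
			fmt.Println("ip_vs loaded: control-plane load-balancing could be enabled")
		} else {
			fmt.Println("ip_vs missing: fall back to the plain kube-vip manifest")
		}
	}
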
	I1122 00:14:58.282999  563925 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 00:14:58.293245  563925 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:14:58.293334  563925 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1122 00:14:58.306481  563925 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1122 00:14:58.327177  563925 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:14:58.341755  563925 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1122 00:14:58.358483  563925 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1122 00:14:58.362397  563925 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:14:58.372758  563925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:14:58.574763  563925 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:14:58.589366  563925 config.go:182] Loaded profile config "ha-561110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:14:58.589071  563925 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:14:58.595464  563925 out.go:179] * Verifying Kubernetes components...
	I1122 00:14:58.597975  563925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:14:58.780512  563925 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:14:58.804624  563925 kapi.go:59] client config for ha-561110: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/client.crt", KeyFile:"/home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/client.key", CAFile:"/home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1122 00:14:58.804704  563925 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
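
The rest.Config dump above is a certificate-based client-go configuration that initially points at the VIP and is then overridden with the primary control-plane endpoint. A minimal sketch of building an equivalent client (assuming the k8s.io/client-go module is available, and reusing the cert paths and endpoint from the log):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// Certificate-based client config, analogous to the rest.Config dumped above.
		cfg := &rest.Config{
			Host: "https://192.168.49.2:8443", // primary control plane, as overridden in the log
			TLSClientConfig: rest.TLSClientConfig{
				CertFile: "/home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/client.crt",
				KeyFile:  "/home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/client.key",
				CAFile:   "/home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt",
			},
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// For example, the node readiness check that follows in the log.
		node, err := client.CoreV1().Nodes().Get(context.Background(), "ha-561110-m02", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println(node.Name, node.Status.Conditions)
	}
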
	I1122 00:14:58.804940  563925 node_ready.go:35] waiting up to 6m0s for node "ha-561110-m02" to be "Ready" ...
	I1122 00:15:18.370415  563925 node_ready.go:49] node "ha-561110-m02" is "Ready"
	I1122 00:15:18.370443  563925 node_ready.go:38] duration metric: took 19.565489572s for node "ha-561110-m02" to be "Ready" ...
	I1122 00:15:18.370457  563925 api_server.go:52] waiting for apiserver process to appear ...
	I1122 00:15:18.370519  563925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:15:18.871467  563925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:15:19.371300  563925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:15:19.387145  563925 api_server.go:72] duration metric: took 20.797721396s to wait for apiserver process to appear ...
	I1122 00:15:19.387224  563925 api_server.go:88] waiting for apiserver healthz status ...
	I1122 00:15:19.387265  563925 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1122 00:15:19.396105  563925 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1122 00:15:19.396183  563925 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1122 00:15:19.887636  563925 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1122 00:15:19.899172  563925 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1122 00:15:19.899202  563925 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1122 00:15:20.387390  563925 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1122 00:15:20.399975  563925 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1122 00:15:20.401338  563925 api_server.go:141] control plane version: v1.34.1
	I1122 00:15:20.401367  563925 api_server.go:131] duration metric: took 1.014115281s to wait for apiserver health ...
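
The health wait above polls https://192.168.49.2:8443/healthz and tolerates 500 responses while the rbac/bootstrap-roles and start-kubernetes-service-cidr-controller post-start hooks finish, stopping once the endpoint returns 200. A rough client-side equivalent, assuming the same CA and client certificate files as in the rest.Config earlier:

	package main

	import (
		"crypto/tls"
		"crypto/x509"
		"fmt"
		"io"
		"net/http"
		"os"
		"time"
	)

	func main() {
		caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt")
		if err != nil {
			panic(err)
		}
		pool := x509.NewCertPool()
		pool.AppendCertsFromPEM(caPEM)

		cert, err := tls.LoadX509KeyPair(
			"/home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/client.crt",
			"/home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/client.key",
		)
		if err != nil {
			panic(err)
		}

		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{RootCAs: pool, Certificates: []tls.Certificate{cert}},
			},
		}

		// Retry roughly every 500ms, like the 00:15:19.387 / 00:15:19.887 / 00:15:20.387 attempts above.
		for {
			resp, err := client.Get("https://192.168.49.2:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy:", string(body))
					return
				}
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
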
	I1122 00:15:20.401377  563925 system_pods.go:43] waiting for kube-system pods to appear ...
	I1122 00:15:20.428331  563925 system_pods.go:59] 25 kube-system pods found
	I1122 00:15:20.428372  563925 system_pods.go:61] "coredns-66bc5c9577-rrkkw" [97c7e1c9-e499-4131-957e-6da8bd29c994] Running
	I1122 00:15:20.428379  563925 system_pods.go:61] "coredns-66bc5c9577-vp8f5" [6d945620-203b-4e4e-b9e2-ef07e6b0f89b] Running
	I1122 00:15:20.428413  563925 system_pods.go:61] "etcd-ha-561110" [5a87193f-0871-4a4c-a409-4d52da31b88b] Running
	I1122 00:15:20.428428  563925 system_pods.go:61] "etcd-ha-561110-m02" [2c4dde3d-3a4c-4d47-b52c-980920facb09] Running
	I1122 00:15:20.428433  563925 system_pods.go:61] "etcd-ha-561110-m03" [d9d64b02-a6c9-48d1-9633-71cfae997fa8] Running
	I1122 00:15:20.428436  563925 system_pods.go:61] "kindnet-4tkd6" [63b063bf-1876-47e2-acb2-a5561b22b975] Running
	I1122 00:15:20.428440  563925 system_pods.go:61] "kindnet-7g65m" [edeca4a6-de24-4444-be9c-cdcbf744f52a] Running
	I1122 00:15:20.428448  563925 system_pods.go:61] "kindnet-dltvw" [ec75f262-ca6c-4766-bc81-60a4e51e94f0] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1122 00:15:20.428457  563925 system_pods.go:61] "kindnet-w4kh7" [61649d36-e515-4c70-831e-2a509e3b67f3] Running
	I1122 00:15:20.428464  563925 system_pods.go:61] "kube-apiserver-ha-561110" [e94b2c4e-8cc8-45e3-9b89-d1805b254c99] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1122 00:15:20.428469  563925 system_pods.go:61] "kube-apiserver-ha-561110-m02" [98ee0c6b-6094-4264-98e8-69d3f1bd0c04] Running
	I1122 00:15:20.428491  563925 system_pods.go:61] "kube-apiserver-ha-561110-m03" [5b0131a7-0af0-48ff-8889-e82b8a2a2e43] Running
	I1122 00:15:20.428503  563925 system_pods.go:61] "kube-controller-manager-ha-561110" [db7b105b-9fa2-43a8-a08d-837b9960db31] Running
	I1122 00:15:20.428508  563925 system_pods.go:61] "kube-controller-manager-ha-561110-m02" [2bb17b90-45c6-4c74-96a1-81f05c51a0cf] Running
	I1122 00:15:20.428511  563925 system_pods.go:61] "kube-controller-manager-ha-561110-m03" [a1fefba1-3967-4b58-b8e7-2bec4a7b896b] Running
	I1122 00:15:20.428516  563925 system_pods.go:61] "kube-proxy-2vctt" [f89e3d32-bca1-4b9a-8531-7eab74e6e0da] Running
	I1122 00:15:20.428527  563925 system_pods.go:61] "kube-proxy-b8wb5" [ac8e8b19-cd59-454e-ab83-b9d08cf4cea0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1122 00:15:20.428533  563925 system_pods.go:61] "kube-proxy-fh5cv" [318c6763-fea1-4564-86f6-18cfad691213] Running
	I1122 00:15:20.428542  563925 system_pods.go:61] "kube-proxy-v5ndg" [5e85dc4a-71dd-40c6-86f6-5c79b7f45194] Running
	I1122 00:15:20.428546  563925 system_pods.go:61] "kube-scheduler-ha-561110" [3267ceff-350f-471c-8e2b-9be8b8bdc471] Running
	I1122 00:15:20.428567  563925 system_pods.go:61] "kube-scheduler-ha-561110-m02" [75edb16c-cd99-46b4-bd49-e0646746877f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1122 00:15:20.428578  563925 system_pods.go:61] "kube-scheduler-ha-561110-m03" [6763f28e-1726-4a48-bac3-1a7e5f82595e] Running
	I1122 00:15:20.428582  563925 system_pods.go:61] "kube-vip-ha-561110-m02" [e4be1217-de52-4c2a-8cfb-a411559af009] Running
	I1122 00:15:20.428596  563925 system_pods.go:61] "kube-vip-ha-561110-m03" [5e7072f7-2a3d-4add-bc1d-e69a03dd28cb] Running
	I1122 00:15:20.428608  563925 system_pods.go:61] "storage-provisioner" [6bf95a26-263b-4088-904d-b344d4826342] Running
	I1122 00:15:20.428614  563925 system_pods.go:74] duration metric: took 27.23022ms to wait for pod list to return data ...
	I1122 00:15:20.428622  563925 default_sa.go:34] waiting for default service account to be created ...
	I1122 00:15:20.444498  563925 default_sa.go:45] found service account: "default"
	I1122 00:15:20.444536  563925 default_sa.go:55] duration metric: took 15.88117ms for default service account to be created ...
	I1122 00:15:20.444583  563925 system_pods.go:116] waiting for k8s-apps to be running ...
	I1122 00:15:20.468591  563925 system_pods.go:86] 25 kube-system pods found
	I1122 00:15:20.468633  563925 system_pods.go:89] "coredns-66bc5c9577-rrkkw" [97c7e1c9-e499-4131-957e-6da8bd29c994] Running
	I1122 00:15:20.468662  563925 system_pods.go:89] "coredns-66bc5c9577-vp8f5" [6d945620-203b-4e4e-b9e2-ef07e6b0f89b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:15:20.468674  563925 system_pods.go:89] "etcd-ha-561110" [5a87193f-0871-4a4c-a409-4d52da31b88b] Running
	I1122 00:15:20.468681  563925 system_pods.go:89] "etcd-ha-561110-m02" [2c4dde3d-3a4c-4d47-b52c-980920facb09] Running
	I1122 00:15:20.468703  563925 system_pods.go:89] "etcd-ha-561110-m03" [d9d64b02-a6c9-48d1-9633-71cfae997fa8] Running
	I1122 00:15:20.468713  563925 system_pods.go:89] "kindnet-4tkd6" [63b063bf-1876-47e2-acb2-a5561b22b975] Running
	I1122 00:15:20.468719  563925 system_pods.go:89] "kindnet-7g65m" [edeca4a6-de24-4444-be9c-cdcbf744f52a] Running
	I1122 00:15:20.468727  563925 system_pods.go:89] "kindnet-dltvw" [ec75f262-ca6c-4766-bc81-60a4e51e94f0] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1122 00:15:20.468736  563925 system_pods.go:89] "kindnet-w4kh7" [61649d36-e515-4c70-831e-2a509e3b67f3] Running
	I1122 00:15:20.468743  563925 system_pods.go:89] "kube-apiserver-ha-561110" [e94b2c4e-8cc8-45e3-9b89-d1805b254c99] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1122 00:15:20.468753  563925 system_pods.go:89] "kube-apiserver-ha-561110-m02" [98ee0c6b-6094-4264-98e8-69d3f1bd0c04] Running
	I1122 00:15:20.468758  563925 system_pods.go:89] "kube-apiserver-ha-561110-m03" [5b0131a7-0af0-48ff-8889-e82b8a2a2e43] Running
	I1122 00:15:20.468762  563925 system_pods.go:89] "kube-controller-manager-ha-561110" [db7b105b-9fa2-43a8-a08d-837b9960db31] Running
	I1122 00:15:20.468785  563925 system_pods.go:89] "kube-controller-manager-ha-561110-m02" [2bb17b90-45c6-4c74-96a1-81f05c51a0cf] Running
	I1122 00:15:20.468796  563925 system_pods.go:89] "kube-controller-manager-ha-561110-m03" [a1fefba1-3967-4b58-b8e7-2bec4a7b896b] Running
	I1122 00:15:20.468800  563925 system_pods.go:89] "kube-proxy-2vctt" [f89e3d32-bca1-4b9a-8531-7eab74e6e0da] Running
	I1122 00:15:20.468809  563925 system_pods.go:89] "kube-proxy-b8wb5" [ac8e8b19-cd59-454e-ab83-b9d08cf4cea0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1122 00:15:20.468818  563925 system_pods.go:89] "kube-proxy-fh5cv" [318c6763-fea1-4564-86f6-18cfad691213] Running
	I1122 00:15:20.468823  563925 system_pods.go:89] "kube-proxy-v5ndg" [5e85dc4a-71dd-40c6-86f6-5c79b7f45194] Running
	I1122 00:15:20.468827  563925 system_pods.go:89] "kube-scheduler-ha-561110" [3267ceff-350f-471c-8e2b-9be8b8bdc471] Running
	I1122 00:15:20.468833  563925 system_pods.go:89] "kube-scheduler-ha-561110-m02" [75edb16c-cd99-46b4-bd49-e0646746877f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1122 00:15:20.468841  563925 system_pods.go:89] "kube-scheduler-ha-561110-m03" [6763f28e-1726-4a48-bac3-1a7e5f82595e] Running
	I1122 00:15:20.468869  563925 system_pods.go:89] "kube-vip-ha-561110-m02" [e4be1217-de52-4c2a-8cfb-a411559af009] Running
	I1122 00:15:20.468881  563925 system_pods.go:89] "kube-vip-ha-561110-m03" [5e7072f7-2a3d-4add-bc1d-e69a03dd28cb] Running
	I1122 00:15:20.468887  563925 system_pods.go:89] "storage-provisioner" [6bf95a26-263b-4088-904d-b344d4826342] Running
	I1122 00:15:20.468911  563925 system_pods.go:126] duration metric: took 24.319558ms to wait for k8s-apps to be running ...
	I1122 00:15:20.468936  563925 system_svc.go:44] waiting for kubelet service to be running ....
	I1122 00:15:20.469011  563925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:15:20.486178  563925 system_svc.go:56] duration metric: took 17.232261ms WaitForService to wait for kubelet
	I1122 00:15:20.486213  563925 kubeadm.go:587] duration metric: took 21.896794227s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:15:20.486246  563925 node_conditions.go:102] verifying NodePressure condition ...
	I1122 00:15:20.505594  563925 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1122 00:15:20.505637  563925 node_conditions.go:123] node cpu capacity is 2
	I1122 00:15:20.505651  563925 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1122 00:15:20.505673  563925 node_conditions.go:123] node cpu capacity is 2
	I1122 00:15:20.505684  563925 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1122 00:15:20.505689  563925 node_conditions.go:123] node cpu capacity is 2
	I1122 00:15:20.505693  563925 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1122 00:15:20.505697  563925 node_conditions.go:123] node cpu capacity is 2
	I1122 00:15:20.505716  563925 node_conditions.go:105] duration metric: took 19.443078ms to run NodePressure ...
	I1122 00:15:20.505736  563925 start.go:242] waiting for startup goroutines ...
	I1122 00:15:20.505776  563925 start.go:256] writing updated cluster config ...
	I1122 00:15:20.509517  563925 out.go:203] 
	I1122 00:15:20.512839  563925 config.go:182] Loaded profile config "ha-561110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:15:20.513009  563925 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/config.json ...
	I1122 00:15:20.516821  563925 out.go:179] * Starting "ha-561110-m03" control-plane node in "ha-561110" cluster
	I1122 00:15:20.520742  563925 cache.go:134] Beginning downloading kic base image for docker with crio
	I1122 00:15:20.524203  563925 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:15:20.527654  563925 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:15:20.527732  563925 cache.go:65] Caching tarball of preloaded images
	I1122 00:15:20.527695  563925 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:15:20.528031  563925 preload.go:238] Found /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1122 00:15:20.528049  563925 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1122 00:15:20.528201  563925 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/config.json ...
	I1122 00:15:20.552866  563925 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:15:20.552887  563925 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:15:20.552899  563925 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:15:20.552922  563925 start.go:360] acquireMachinesLock for ha-561110-m03: {Name:mk8a19cfae84d78ad843d3f8169a3190cadb2d92 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:15:20.552971  563925 start.go:364] duration metric: took 34.805µs to acquireMachinesLock for "ha-561110-m03"
	I1122 00:15:20.552989  563925 start.go:96] Skipping create...Using existing machine configuration
	I1122 00:15:20.552994  563925 fix.go:54] fixHost starting: m03
	I1122 00:15:20.553255  563925 cli_runner.go:164] Run: docker container inspect ha-561110-m03 --format={{.State.Status}}
	I1122 00:15:20.581965  563925 fix.go:112] recreateIfNeeded on ha-561110-m03: state=Stopped err=<nil>
	W1122 00:15:20.581999  563925 fix.go:138] unexpected machine state, will restart: <nil>
	I1122 00:15:20.586013  563925 out.go:252] * Restarting existing docker container for "ha-561110-m03" ...
	I1122 00:15:20.586099  563925 cli_runner.go:164] Run: docker start ha-561110-m03
	I1122 00:15:20.954348  563925 cli_runner.go:164] Run: docker container inspect ha-561110-m03 --format={{.State.Status}}
	I1122 00:15:20.979345  563925 kic.go:430] container "ha-561110-m03" state is running.
	I1122 00:15:20.979708  563925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-561110-m03
	I1122 00:15:21.002371  563925 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/config.json ...
	I1122 00:15:21.002682  563925 machine.go:94] provisionDockerMachine start ...
	I1122 00:15:21.002758  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m03
	I1122 00:15:21.032872  563925 main.go:143] libmachine: Using SSH client type: native
	I1122 00:15:21.033195  563925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33545 <nil> <nil>}
	I1122 00:15:21.033211  563925 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:15:21.033881  563925 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1122 00:15:24.293634  563925 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-561110-m03
	
	I1122 00:15:24.293664  563925 ubuntu.go:182] provisioning hostname "ha-561110-m03"
	I1122 00:15:24.293763  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m03
	I1122 00:15:24.324599  563925 main.go:143] libmachine: Using SSH client type: native
	I1122 00:15:24.324926  563925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33545 <nil> <nil>}
	I1122 00:15:24.324939  563925 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-561110-m03 && echo "ha-561110-m03" | sudo tee /etc/hostname
	I1122 00:15:24.595129  563925 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-561110-m03
	
	I1122 00:15:24.595249  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m03
	I1122 00:15:24.620733  563925 main.go:143] libmachine: Using SSH client type: native
	I1122 00:15:24.621049  563925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33545 <nil> <nil>}
	I1122 00:15:24.621676  563925 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-561110-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-561110-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-561110-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:15:24.856356  563925 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1122 00:15:24.856384  563925 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-513600/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-513600/.minikube}
	I1122 00:15:24.856400  563925 ubuntu.go:190] setting up certificates
	I1122 00:15:24.856434  563925 provision.go:84] configureAuth start
	I1122 00:15:24.856521  563925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-561110-m03
	I1122 00:15:24.885855  563925 provision.go:143] copyHostCerts
	I1122 00:15:24.885898  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem
	I1122 00:15:24.885930  563925 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem, removing ...
	I1122 00:15:24.885941  563925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem
	I1122 00:15:24.886031  563925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem (1078 bytes)
	I1122 00:15:24.886116  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem
	I1122 00:15:24.886139  563925 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem, removing ...
	I1122 00:15:24.886147  563925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem
	I1122 00:15:24.886175  563925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem (1123 bytes)
	I1122 00:15:24.886221  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem
	I1122 00:15:24.886242  563925 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem, removing ...
	I1122 00:15:24.886246  563925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem
	I1122 00:15:24.886271  563925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem (1675 bytes)
	I1122 00:15:24.886322  563925 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem org=jenkins.ha-561110-m03 san=[127.0.0.1 192.168.49.4 ha-561110-m03 localhost minikube]
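
configureAuth above issues a server certificate for ha-561110-m03 signed by the machine CA, with SANs 127.0.0.1, 192.168.49.4, ha-561110-m03, localhost and minikube. A compact crypto/x509 sketch of issuing such a certificate (illustrative file names; it assumes the CA key is an RSA PKCS#1 "RSA PRIVATE KEY" block, which may differ from the real setup):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"os"
		"time"
	)

	// loadCA reads a PEM-encoded CA certificate and RSA private key from disk.
	func loadCA(certPath, keyPath string) (*x509.Certificate, *rsa.PrivateKey, error) {
		certPEM, err := os.ReadFile(certPath)
		if err != nil {
			return nil, nil, err
		}
		keyPEM, err := os.ReadFile(keyPath)
		if err != nil {
			return nil, nil, err
		}
		certBlock, _ := pem.Decode(certPEM)
		keyBlock, _ := pem.Decode(keyPEM)
		if certBlock == nil || keyBlock == nil {
			return nil, nil, fmt.Errorf("no PEM data in %s or %s", certPath, keyPath)
		}
		caCert, err := x509.ParseCertificate(certBlock.Bytes)
		if err != nil {
			return nil, nil, err
		}
		caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumption: RSA PKCS#1 key
		if err != nil {
			return nil, nil, err
		}
		return caCert, caKey, nil
	}

	func main() {
		caCert, caKey, err := loadCA("certs/ca.pem", "certs/ca-key.pem") // illustrative paths
		if err != nil {
			panic(err)
		}
		serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		// SANs as reported by provision.go above.
		tmpl := x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-561110-m03"}},
			DNSNames:     []string{"ha-561110-m03", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.4")},
			NotBefore:    time.Now().Add(-time.Hour),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, &tmpl, caCert, &serverKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		// Error handling trimmed for brevity in this sketch.
		_ = os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
		_ = os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0600)
	}
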
	I1122 00:15:25.343405  563925 provision.go:177] copyRemoteCerts
	I1122 00:15:25.343499  563925 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:15:25.343569  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m03
	I1122 00:15:25.363935  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33545 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110-m03/id_rsa Username:docker}
	I1122 00:15:25.550286  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1122 00:15:25.550350  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1122 00:15:25.575299  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1122 00:15:25.575374  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:15:25.598237  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1122 00:15:25.598338  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1122 00:15:25.628049  563925 provision.go:87] duration metric: took 771.594834ms to configureAuth
	I1122 00:15:25.628077  563925 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:15:25.628358  563925 config.go:182] Loaded profile config "ha-561110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:15:25.628508  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m03
	I1122 00:15:25.662079  563925 main.go:143] libmachine: Using SSH client type: native
	I1122 00:15:25.662398  563925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33545 <nil> <nil>}
	I1122 00:15:25.662419  563925 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1122 00:15:26.350066  563925 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1122 00:15:26.350092  563925 machine.go:97] duration metric: took 5.34739065s to provisionDockerMachine
	I1122 00:15:26.350164  563925 start.go:293] postStartSetup for "ha-561110-m03" (driver="docker")
	I1122 00:15:26.350184  563925 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:15:26.350274  563925 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:15:26.350334  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m03
	I1122 00:15:26.375980  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33545 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110-m03/id_rsa Username:docker}
	I1122 00:15:26.492303  563925 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:15:26.496241  563925 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:15:26.496272  563925 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:15:26.496284  563925 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/addons for local assets ...
	I1122 00:15:26.496339  563925 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/files for local assets ...
	I1122 00:15:26.496422  563925 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem -> 5169372.pem in /etc/ssl/certs
	I1122 00:15:26.496433  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem -> /etc/ssl/certs/5169372.pem
	I1122 00:15:26.496535  563925 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:15:26.505321  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /etc/ssl/certs/5169372.pem (1708 bytes)
	I1122 00:15:26.526339  563925 start.go:296] duration metric: took 176.150409ms for postStartSetup
	I1122 00:15:26.526443  563925 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:15:26.526504  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m03
	I1122 00:15:26.550085  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33545 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110-m03/id_rsa Username:docker}
	I1122 00:15:26.663353  563925 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:15:26.670831  563925 fix.go:56] duration metric: took 6.117814975s for fixHost
	I1122 00:15:26.670857  563925 start.go:83] releasing machines lock for "ha-561110-m03", held for 6.117877799s
	I1122 00:15:26.670925  563925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-561110-m03
	I1122 00:15:26.706528  563925 out.go:179] * Found network options:
	I1122 00:15:26.709469  563925 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1122 00:15:26.712333  563925 proxy.go:120] fail to check proxy env: Error ip not in block
	W1122 00:15:26.712371  563925 proxy.go:120] fail to check proxy env: Error ip not in block
	W1122 00:15:26.712395  563925 proxy.go:120] fail to check proxy env: Error ip not in block
	W1122 00:15:26.712406  563925 proxy.go:120] fail to check proxy env: Error ip not in block
	I1122 00:15:26.712494  563925 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1122 00:15:26.712541  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m03
	I1122 00:15:26.712807  563925 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:15:26.712873  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m03
	I1122 00:15:26.749585  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33545 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110-m03/id_rsa Username:docker}
	I1122 00:15:26.751996  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33545 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110-m03/id_rsa Username:docker}
	I1122 00:15:27.082598  563925 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:15:27.101543  563925 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:15:27.101616  563925 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:15:27.126235  563925 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1122 00:15:27.126257  563925 start.go:496] detecting cgroup driver to use...
	I1122 00:15:27.126287  563925 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1122 00:15:27.126334  563925 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1122 00:15:27.165923  563925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1122 00:15:27.239673  563925 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:15:27.239811  563925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:15:27.293000  563925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:15:27.338853  563925 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:15:27.741533  563925 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:15:28.092677  563925 docker.go:234] disabling docker service ...
	I1122 00:15:28.092771  563925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:15:28.168796  563925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:15:28.226242  563925 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:15:28.659941  563925 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:15:29.058606  563925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:15:29.101920  563925 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:15:29.136744  563925 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1122 00:15:29.136856  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:15:29.162030  563925 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1122 00:15:29.162149  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:15:29.183947  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:15:29.221891  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:15:29.244672  563925 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:15:29.275560  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:15:29.306222  563925 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:15:29.332094  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:15:29.350775  563925 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:15:29.370006  563925 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:15:29.391362  563925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:15:29.706214  563925 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1122 00:17:00.097219  563925 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.390962529s)
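
The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs cgroup manager, conmon cgroup, and the unprivileged-port sysctl), reloads systemd, and restarts CRI-O, which here took 1m30s. As a rough stdlib-only illustration of one of those edits (not minikube's implementation), the pause_image rewrite could look like this:

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// setPauseImage rewrites the pause_image key in a CRI-O drop-in config,
	// the same effect as the sed invocation in the log above.
	func setPauseImage(path, image string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		if !re.Match(data) {
			return fmt.Errorf("%s: no pause_image line to rewrite", path)
		}
		updated := re.ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", image)))
		return os.WriteFile(path, updated, 0644)
	}

	func main() {
		if err := setPauseImage("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.10.1"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// A real flow would then run `systemctl daemon-reload` and `systemctl restart crio`,
		// as the log does.
	}
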
	I1122 00:17:00.097249  563925 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1122 00:17:00.097319  563925 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1122 00:17:00.113544  563925 start.go:564] Will wait 60s for crictl version
	I1122 00:17:00.113649  563925 ssh_runner.go:195] Run: which crictl
	I1122 00:17:00.136784  563925 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:17:00.321902  563925 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1122 00:17:00.322038  563925 ssh_runner.go:195] Run: crio --version
	I1122 00:17:00.437751  563925 ssh_runner.go:195] Run: crio --version
	I1122 00:17:00.498700  563925 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1122 00:17:00.502322  563925 out.go:179]   - env NO_PROXY=192.168.49.2
	I1122 00:17:00.505365  563925 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1122 00:17:00.508493  563925 cli_runner.go:164] Run: docker network inspect ha-561110 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:17:00.538039  563925 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1122 00:17:00.545403  563925 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:17:00.558621  563925 mustload.go:66] Loading cluster: ha-561110
	I1122 00:17:00.558938  563925 config.go:182] Loaded profile config "ha-561110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:17:00.559221  563925 cli_runner.go:164] Run: docker container inspect ha-561110 --format={{.State.Status}}
	I1122 00:17:00.586783  563925 host.go:66] Checking if "ha-561110" exists ...
	I1122 00:17:00.587143  563925 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110 for IP: 192.168.49.4
	I1122 00:17:00.587159  563925 certs.go:195] generating shared ca certs ...
	I1122 00:17:00.587181  563925 certs.go:227] acquiring lock for ca certs: {Name:mkaf4c79493334cb2058b349c36be7473837f9b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:17:00.587353  563925 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key
	I1122 00:17:00.587400  563925 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key
	I1122 00:17:00.587412  563925 certs.go:257] generating profile certs ...
	I1122 00:17:00.587496  563925 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/client.key
	I1122 00:17:00.587573  563925 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.key.be48eb15
	I1122 00:17:00.587622  563925 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.key
	I1122 00:17:00.587635  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1122 00:17:00.587651  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1122 00:17:00.587667  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1122 00:17:00.587723  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1122 00:17:00.587739  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1122 00:17:00.587752  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1122 00:17:00.587768  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1122 00:17:00.587778  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1122 00:17:00.587836  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem (1338 bytes)
	W1122 00:17:00.587877  563925 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937_empty.pem, impossibly tiny 0 bytes
	I1122 00:17:00.587891  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:17:00.587929  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:17:00.587961  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:17:00.587990  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem (1675 bytes)
	I1122 00:17:00.588101  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem (1708 bytes)
	I1122 00:17:00.588199  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem -> /usr/share/ca-certificates/516937.pem
	I1122 00:17:00.588226  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem -> /usr/share/ca-certificates/5169372.pem
	I1122 00:17:00.588241  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:17:00.588312  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110
	I1122 00:17:00.613873  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33535 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110/id_rsa Username:docker}
	I1122 00:17:00.714215  563925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1122 00:17:00.718718  563925 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1122 00:17:00.729019  563925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1122 00:17:00.733330  563925 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1122 00:17:00.743477  563925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1122 00:17:00.747658  563925 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1122 00:17:00.758201  563925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1122 00:17:00.763435  563925 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1122 00:17:00.773425  563925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1122 00:17:00.777456  563925 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1122 00:17:00.787246  563925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1122 00:17:00.791598  563925 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1122 00:17:00.801660  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:17:00.826055  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1122 00:17:00.848933  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:17:00.888604  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:17:00.921496  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1122 00:17:00.951086  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1122 00:17:00.975145  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:17:00.999138  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1122 00:17:01.024534  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem --> /usr/share/ca-certificates/516937.pem (1338 bytes)
	I1122 00:17:01.046560  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /usr/share/ca-certificates/5169372.pem (1708 bytes)
	I1122 00:17:01.072877  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:17:01.103089  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1122 00:17:01.119601  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1122 00:17:01.136419  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1122 00:17:01.153380  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1122 00:17:01.171240  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1122 00:17:01.202584  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1122 00:17:01.223852  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1122 00:17:01.247292  563925 ssh_runner.go:195] Run: openssl version
	I1122 00:17:01.259516  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:17:01.280780  563925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:17:01.289039  563925 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:17:01.289158  563925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:17:01.373640  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1122 00:17:01.395461  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516937.pem && ln -fs /usr/share/ca-certificates/516937.pem /etc/ssl/certs/516937.pem"
	I1122 00:17:01.420524  563925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516937.pem
	I1122 00:17:01.426623  563925 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:55 /usr/share/ca-certificates/516937.pem
	I1122 00:17:01.426698  563925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516937.pem
	I1122 00:17:01.478449  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516937.pem /etc/ssl/certs/51391683.0"
	I1122 00:17:01.490493  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5169372.pem && ln -fs /usr/share/ca-certificates/5169372.pem /etc/ssl/certs/5169372.pem"
	I1122 00:17:01.502084  563925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5169372.pem
	I1122 00:17:01.507855  563925 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:55 /usr/share/ca-certificates/5169372.pem
	I1122 00:17:01.507956  563925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5169372.pem
	I1122 00:17:01.587957  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5169372.pem /etc/ssl/certs/3ec20f2e.0"
	I1122 00:17:01.599719  563925 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:17:01.605126  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1122 00:17:01.660029  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1122 00:17:01.712345  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1122 00:17:01.786467  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1122 00:17:01.862166  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1122 00:17:01.946187  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1122 00:17:02.010384  563925 kubeadm.go:935] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1122 00:17:02.010523  563925 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-561110-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-561110 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1122 00:17:02.010557  563925 kube-vip.go:115] generating kube-vip config ...
	I1122 00:17:02.010619  563925 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1122 00:17:02.037246  563925 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1122 00:17:02.037316  563925 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
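Because the lsmod probe above found no ip_vs modules, the generated kube-vip manifest only advertises the VIP 192.168.49.254 via ARP (vip_arp, cp_enable) and omits control-plane load-balancing. A hedged check, assuming shell access to the node, for whether the kernel could supply the modules that probe looks for (not part of the test run):

    # Try to load ip_vs, then re-run the same probe the log shows above.
    sudo modprobe ip_vs && lsmod | grep ip_vs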
	I1122 00:17:02.037405  563925 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 00:17:02.052472  563925 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:17:02.052567  563925 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1122 00:17:02.073857  563925 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1122 00:17:02.112139  563925 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:17:02.133854  563925 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1122 00:17:02.152649  563925 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1122 00:17:02.158389  563925 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:17:02.184228  563925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:17:02.493772  563925 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:17:02.514312  563925 start.go:236] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:17:02.514696  563925 config.go:182] Loaded profile config "ha-561110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:17:02.518824  563925 out.go:179] * Verifying Kubernetes components...
	I1122 00:17:02.521919  563925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:17:02.746981  563925 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:17:02.765468  563925 kapi.go:59] client config for ha-561110: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/client.crt", KeyFile:"/home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/client.key", CAFile:"/home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1122 00:17:02.765589  563925 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1122 00:17:02.765898  563925 node_ready.go:35] waiting up to 6m0s for node "ha-561110-m03" to be "Ready" ...
	W1122 00:17:04.770183  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:06.771513  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:09.269611  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:11.270683  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:13.275612  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:15.769660  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:17.769933  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:20.269315  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:22.270943  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:24.769260  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:26.770369  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:29.269015  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:31.269858  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:33.269945  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:35.769971  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:38.269922  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:40.270335  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:42.271149  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:44.770140  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:47.269690  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:49.270654  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:51.770465  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:54.269768  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:56.769254  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:58.769625  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:00.770202  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:02.773270  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:05.270130  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:07.271583  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:09.769397  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:11.770012  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:13.770106  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:16.270008  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:18.771373  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:21.270047  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:23.768948  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:25.770213  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:28.269635  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:30.770096  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:32.771794  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:35.270059  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:37.769842  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:40.269289  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:42.273345  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:44.275125  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:46.776656  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:49.270280  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:51.770076  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:54.269588  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:56.270135  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:58.768991  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:00.771422  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:03.269840  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:05.270420  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:07.770020  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:10.268980  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:12.269695  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:14.769271  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:16.769509  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:19.270240  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:21.769249  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:23.770580  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:26.269982  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:28.770054  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:31.269163  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:33.269886  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:35.270677  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:37.769622  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:39.769703  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:42.270956  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:44.768762  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:46.769989  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:49.269515  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:51.270122  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:53.769467  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:55.770293  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:58.269947  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:00.322810  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:02.769554  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:04.770551  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:07.269784  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:09.769344  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:11.769990  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:14.269132  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:16.269765  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:18.770174  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:21.269837  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:23.270065  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:25.770172  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:28.269279  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:30.270734  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:32.769392  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:34.769668  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:36.770010  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:38.770203  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:40.770721  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:43.270389  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:45.276123  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:47.770112  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:50.269310  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:52.269861  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:54.270570  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:56.769591  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:58.770126  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:01.270099  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:03.769793  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:05.771503  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:08.269537  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:10.770347  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:13.269687  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:15.270464  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:17.271724  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:19.769950  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:22.269581  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:24.269903  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:26.269977  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:28.769453  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:30.770323  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:33.270153  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:35.769486  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:37.770126  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:39.770389  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:42.273464  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:44.769688  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:46.770370  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:49.269335  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:51.270430  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:53.769776  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:56.269697  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:58.270251  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:00.292924  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:02.779828  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:05.270290  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:07.270475  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:09.769072  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:11.769917  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:13.770097  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:16.269780  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:18.269850  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:20.276178  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:22.770032  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:25.270326  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:27.769736  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:30.270331  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:32.768987  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:35.269587  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:37.770642  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:40.269226  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:42.281918  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:44.770302  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:47.269651  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:49.270011  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:51.770305  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:54.269848  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:56.269962  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:58.770073  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:23:00.770445  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	I1122 00:23:02.766152  563925 node_ready.go:38] duration metric: took 6m0.000206678s for node "ha-561110-m03" to be "Ready" ...
	I1122 00:23:02.769486  563925 out.go:203] 
	W1122 00:23:02.772416  563925 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1122 00:23:02.772436  563925 out.go:285] * 
	W1122 00:23:02.774635  563925 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1122 00:23:02.776836  563925 out.go:203] 
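The wait loop above polled node "ha-561110-m03" for the full 6m0s and only ever saw a Ready status of "Unknown", which is what produces the GUEST_START failure. A hedged way to inspect the same condition directly, assuming the kubeconfig context that minikube creates for this profile (the exact invocation is not taken from the log):

    # Show the Ready condition the wait loop was checking.
    kubectl --context ha-561110 get node ha-561110-m03 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")]}'
    # For more detail, `kubectl describe node ha-561110-m03` lists the kubelet
    # heartbeats and taints behind the Unknown status.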
	
	
	==> CRI-O <==
	Nov 22 00:15:52 ha-561110 crio[666]: time="2025-11-22T00:15:52.39043996Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=6b55a33c-982b-407b-a39e-f5c092d837ad name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:15:52 ha-561110 crio[666]: time="2025-11-22T00:15:52.391455898Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=aed84f71-7deb-4060-a2b1-3504a94ddccd name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:15:52 ha-561110 crio[666]: time="2025-11-22T00:15:52.391592756Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:15:52 ha-561110 crio[666]: time="2025-11-22T00:15:52.398141795Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:15:52 ha-561110 crio[666]: time="2025-11-22T00:15:52.398456674Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/5080ecedb1aca210f92642c0da614341ac5baee6bb123e6d3efa15080462423f/merged/etc/passwd: no such file or directory"
	Nov 22 00:15:52 ha-561110 crio[666]: time="2025-11-22T00:15:52.398549644Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/5080ecedb1aca210f92642c0da614341ac5baee6bb123e6d3efa15080462423f/merged/etc/group: no such file or directory"
	Nov 22 00:15:52 ha-561110 crio[666]: time="2025-11-22T00:15:52.398849032Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:15:52 ha-561110 crio[666]: time="2025-11-22T00:15:52.429646993Z" level=info msg="Created container 135f8581d288b240b9c444b0861bec261a02882a56b15c99e1bb476a861d296a: kube-system/storage-provisioner/storage-provisioner" id=aed84f71-7deb-4060-a2b1-3504a94ddccd name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:15:52 ha-561110 crio[666]: time="2025-11-22T00:15:52.430745119Z" level=info msg="Starting container: 135f8581d288b240b9c444b0861bec261a02882a56b15c99e1bb476a861d296a" id=781c7a19-539c-4417-a691-8f4e096b71ed name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:15:52 ha-561110 crio[666]: time="2025-11-22T00:15:52.435089701Z" level=info msg="Started container" PID=1391 containerID=135f8581d288b240b9c444b0861bec261a02882a56b15c99e1bb476a861d296a description=kube-system/storage-provisioner/storage-provisioner id=781c7a19-539c-4417-a691-8f4e096b71ed name=/runtime.v1.RuntimeService/StartContainer sandboxID=de4629de69837fe0447ae13245102ae0d04524a3858dcce8a9d5b8e10bb91eaf
	Nov 22 00:16:01 ha-561110 crio[666]: time="2025-11-22T00:16:01.470325793Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:16:01 ha-561110 crio[666]: time="2025-11-22T00:16:01.474774281Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:16:01 ha-561110 crio[666]: time="2025-11-22T00:16:01.474811154Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 22 00:16:01 ha-561110 crio[666]: time="2025-11-22T00:16:01.474835695Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:16:01 ha-561110 crio[666]: time="2025-11-22T00:16:01.478898906Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:16:01 ha-561110 crio[666]: time="2025-11-22T00:16:01.478939659Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 22 00:16:01 ha-561110 crio[666]: time="2025-11-22T00:16:01.478962272Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:16:01 ha-561110 crio[666]: time="2025-11-22T00:16:01.482066282Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:16:01 ha-561110 crio[666]: time="2025-11-22T00:16:01.482101062Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 22 00:16:01 ha-561110 crio[666]: time="2025-11-22T00:16:01.482122772Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:16:01 ha-561110 crio[666]: time="2025-11-22T00:16:01.485482939Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:16:01 ha-561110 crio[666]: time="2025-11-22T00:16:01.485521674Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 22 00:16:01 ha-561110 crio[666]: time="2025-11-22T00:16:01.485545829Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:16:01 ha-561110 crio[666]: time="2025-11-22T00:16:01.488891801Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:16:01 ha-561110 crio[666]: time="2025-11-22T00:16:01.48892796Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	135f8581d288b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   7 minutes ago       Running             storage-provisioner       2                   de4629de69837       storage-provisioner                 kube-system
	fe1c6226bf4c6       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   1                   b641fd83b9816       coredns-66bc5c9577-vp8f5            kube-system
	69ffa71725510       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   1                   1104258d7fdef       coredns-66bc5c9577-rrkkw            kube-system
	60513ca704c00       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   7 minutes ago       Exited              storage-provisioner       1                   de4629de69837       storage-provisioner                 kube-system
	d9e4613f17ffd       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   7 minutes ago       Running             kube-proxy                1                   1ff3e662bdd09       kube-proxy-fh5cv                    kube-system
	a2d8ce4bb1edd       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   7 minutes ago       Running             busybox                   1                   b8e440f614e56       busybox-7b57f96db7-fbtrb            default
	5a2fb45570b8d       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   7 minutes ago       Running             kindnet-cni               1                   55f44270c0111       kindnet-7g65m                       kube-system
	555f050993ba2       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   8 minutes ago       Running             kube-controller-manager   2                   10dbad5a4508a       kube-controller-manager-ha-561110   kube-system
	4cbb3fde391bd       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   8 minutes ago       Running             kube-apiserver            1                   38691a4dbf6ea       kube-apiserver-ha-561110            kube-system
	4360f5517fd5e       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   8 minutes ago       Running             kube-scheduler            1                   e0baba9cafe90       kube-scheduler-ha-561110            kube-system
	a395e7473ffe2       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   8 minutes ago       Running             etcd                      1                   193446051a803       etcd-ha-561110                      kube-system
	9fdf72902e6e0       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   8 minutes ago       Running             kube-vip                  0                   884d14e2e6045       kube-vip-ha-561110                  kube-system
	1c929db60119a       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   8 minutes ago       Exited              kube-controller-manager   1                   10dbad5a4508a       kube-controller-manager-ha-561110   kube-system
	
	
	==> coredns [69ffa7172551035e0586a2f61f518f9846bd0b87abc14ba1505f02248c5a9a02] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39796 - 60732 "HINFO IN 576766510875163090.3461274759123809982. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.004198928s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [fe1c6226bf4c6a8f0d43125ecd01e36e538a750fd9dd5c3edb73d4ffd5a90aff] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58159 - 30701 "HINFO IN 6742751567940684104.616832762995402637. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.025967847s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               ha-561110
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-561110
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=ha-561110
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T00_09_04_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:09:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-561110
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:23:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:21:46 +0000   Sat, 22 Nov 2025 00:08:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:21:46 +0000   Sat, 22 Nov 2025 00:08:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:21:46 +0000   Sat, 22 Nov 2025 00:08:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 22 Nov 2025 00:21:46 +0000   Sat, 22 Nov 2025 00:15:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-561110
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c92eea4d3c03c0156e01fcf8691e3907
	  System UUID:                77a39681-2950-4264-8660-77e1aeddeb83
	  Boot ID:                    72ac7385-472f-47d1-a23e-bd80468d6e09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-fbtrb             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-66bc5c9577-rrkkw             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 coredns-66bc5c9577-vp8f5             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 etcd-ha-561110                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         14m
	  kube-system                 kindnet-7g65m                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-ha-561110             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-561110    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-fh5cv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-561110             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-561110                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m48s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m50s                  kube-proxy       
	  Normal   Starting                 14m                    kube-proxy       
	  Normal   NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-561110 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14m (x8 over 14m)      kubelet          Node ha-561110 status is now: NodeHasSufficientPID
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 14m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-561110 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 14m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     14m                    kubelet          Node ha-561110 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  14m                    kubelet          Node ha-561110 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m                    kubelet          Node ha-561110 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           14m                    node-controller  Node ha-561110 event: Registered Node ha-561110 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-561110 event: Registered Node ha-561110 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-561110 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-561110 event: Registered Node ha-561110 in Controller
	  Normal   RegisteredNode           8m54s                  node-controller  Node ha-561110 event: Registered Node ha-561110 in Controller
	  Warning  CgroupV1                 8m26s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8m25s (x8 over 8m25s)  kubelet          Node ha-561110 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m25s (x8 over 8m25s)  kubelet          Node ha-561110 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m25s (x8 over 8m25s)  kubelet          Node ha-561110 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m51s                  node-controller  Node ha-561110 event: Registered Node ha-561110 in Controller
	  Normal   RegisteredNode           7m43s                  node-controller  Node ha-561110 event: Registered Node ha-561110 in Controller
	
	
	Name:               ha-561110-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-561110-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=ha-561110
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_22T00_09_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:09:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-561110-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:23:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:23:08 +0000   Sat, 22 Nov 2025 00:09:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:23:08 +0000   Sat, 22 Nov 2025 00:09:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:23:08 +0000   Sat, 22 Nov 2025 00:09:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 22 Nov 2025 00:23:08 +0000   Sat, 22 Nov 2025 00:10:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-561110-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c92eea4d3c03c0156e01fcf8691e3907
	  System UUID:                a2162c95-cc29-4cd8-8a91-589e6eb1ab6b
	  Boot ID:                    72ac7385-472f-47d1-a23e-bd80468d6e09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-dx9nw                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-561110-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-dltvw                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-561110-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-561110-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-b8wb5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-561110-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-561110-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m39s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   RegisteredNode           13m                    node-controller  Node ha-561110-m02 event: Registered Node ha-561110-m02 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-561110-m02 event: Registered Node ha-561110-m02 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-561110-m02 event: Registered Node ha-561110-m02 in Controller
	  Normal   NodeHasSufficientPID     9m27s (x8 over 9m27s)  kubelet          Node ha-561110-m02 status is now: NodeHasSufficientPID
	  Normal   Starting                 9m27s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m27s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9m27s (x8 over 9m27s)  kubelet          Node ha-561110-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m27s (x8 over 9m27s)  kubelet          Node ha-561110-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           8m54s                  node-controller  Node ha-561110-m02 event: Registered Node ha-561110-m02 in Controller
	  Normal   Starting                 8m22s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m22s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8m21s (x8 over 8m22s)  kubelet          Node ha-561110-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m21s (x8 over 8m22s)  kubelet          Node ha-561110-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m21s (x8 over 8m22s)  kubelet          Node ha-561110-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m51s                  node-controller  Node ha-561110-m02 event: Registered Node ha-561110-m02 in Controller
	  Normal   RegisteredNode           7m43s                  node-controller  Node ha-561110-m02 event: Registered Node ha-561110-m02 in Controller
	
	
	Name:               ha-561110-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-561110-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=ha-561110
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_22T00_12_27_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:12:26 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-561110-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:14:09 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 22 Nov 2025 00:13:09 +0000   Sat, 22 Nov 2025 00:16:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 22 Nov 2025 00:13:09 +0000   Sat, 22 Nov 2025 00:16:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 22 Nov 2025 00:13:09 +0000   Sat, 22 Nov 2025 00:16:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 22 Nov 2025 00:13:09 +0000   Sat, 22 Nov 2025 00:16:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-561110-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c92eea4d3c03c0156e01fcf8691e3907
	  System UUID:                00d86356-c884-4dfd-a214-95f51a02c157
	  Boot ID:                    72ac7385-472f-47d1-a23e-bd80468d6e09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-4tkd6       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-proxy-2vctt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x3 over 10m)  kubelet          Node ha-561110-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x3 over 10m)  kubelet          Node ha-561110-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x3 over 10m)  kubelet          Node ha-561110-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node ha-561110-m04 event: Registered Node ha-561110-m04 in Controller
	  Normal  RegisteredNode           10m                node-controller  Node ha-561110-m04 event: Registered Node ha-561110-m04 in Controller
	  Normal  RegisteredNode           10m                node-controller  Node ha-561110-m04 event: Registered Node ha-561110-m04 in Controller
	  Normal  NodeReady                10m                kubelet          Node ha-561110-m04 status is now: NodeReady
	  Normal  RegisteredNode           8m54s              node-controller  Node ha-561110-m04 event: Registered Node ha-561110-m04 in Controller
	  Normal  RegisteredNode           7m51s              node-controller  Node ha-561110-m04 event: Registered Node ha-561110-m04 in Controller
	  Normal  RegisteredNode           7m43s              node-controller  Node ha-561110-m04 event: Registered Node ha-561110-m04 in Controller
	  Normal  NodeNotReady             7m1s               node-controller  Node ha-561110-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[Nov21 23:16] overlayfs: idmapped layers are currently not supported
	[Nov21 23:17] overlayfs: idmapped layers are currently not supported
	[ +10.681159] overlayfs: idmapped layers are currently not supported
	[Nov21 23:19] overlayfs: idmapped layers are currently not supported
	[ +15.192296] overlayfs: idmapped layers are currently not supported
	[Nov21 23:20] overlayfs: idmapped layers are currently not supported
	[Nov21 23:21] overlayfs: idmapped layers are currently not supported
	[Nov21 23:22] overlayfs: idmapped layers are currently not supported
	[ +12.884842] overlayfs: idmapped layers are currently not supported
	[Nov21 23:23] overlayfs: idmapped layers are currently not supported
	[ +12.022080] overlayfs: idmapped layers are currently not supported
	[Nov21 23:25] overlayfs: idmapped layers are currently not supported
	[ +24.447615] overlayfs: idmapped layers are currently not supported
	[Nov21 23:46] kauditd_printk_skb: 8 callbacks suppressed
	[Nov21 23:48] overlayfs: idmapped layers are currently not supported
	[Nov21 23:54] overlayfs: idmapped layers are currently not supported
	[Nov21 23:55] overlayfs: idmapped layers are currently not supported
	[Nov22 00:08] overlayfs: idmapped layers are currently not supported
	[Nov22 00:09] overlayfs: idmapped layers are currently not supported
	[Nov22 00:10] overlayfs: idmapped layers are currently not supported
	[Nov22 00:12] overlayfs: idmapped layers are currently not supported
	[Nov22 00:13] overlayfs: idmapped layers are currently not supported
	[Nov22 00:14] overlayfs: idmapped layers are currently not supported
	[  +3.904643] overlayfs: idmapped layers are currently not supported
	[Nov22 00:15] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [a395e7473ffe2b7999ae75a70e19b4f153d459c8ccae48aeeb71b5b6248cc1f2] <==
	{"level":"warn","ts":"2025-11-22T00:22:53.988161Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"700ebc6e9635b48f","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-22T00:22:54.402648Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"700ebc6e9635b48f","rtt":"51.609423ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-22T00:22:54.402639Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"700ebc6e9635b48f","rtt":"61.899654ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-22T00:22:57.989780Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"700ebc6e9635b48f","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-22T00:22:57.989860Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"700ebc6e9635b48f","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-22T00:22:59.403601Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"700ebc6e9635b48f","rtt":"51.609423ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-22T00:22:59.403589Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"700ebc6e9635b48f","rtt":"61.899654ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-22T00:23:01.991023Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"700ebc6e9635b48f","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-22T00:23:01.991083Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"700ebc6e9635b48f","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-22T00:23:04.403878Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"700ebc6e9635b48f","rtt":"61.899654ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-22T00:23:04.403933Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"700ebc6e9635b48f","rtt":"51.609423ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-22T00:23:05.992468Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"700ebc6e9635b48f","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-22T00:23:05.992552Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"700ebc6e9635b48f","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-22T00:23:06.740226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:36170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:23:06.795797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:36198","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-22T00:23:06.818468Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(12355062781122549397 12593026477526642892)"}
	{"level":"info","ts":"2025-11-22T00:23:06.820790Z","caller":"membership/cluster.go:460","msg":"removed member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"700ebc6e9635b48f","removed-remote-peer-urls":["https://192.168.49.4:2380"],"removed-remote-peer-is-learner":false}
	{"level":"info","ts":"2025-11-22T00:23:06.820867Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"700ebc6e9635b48f"}
	{"level":"info","ts":"2025-11-22T00:23:06.820913Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"700ebc6e9635b48f"}
	{"level":"info","ts":"2025-11-22T00:23:06.820978Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"700ebc6e9635b48f"}
	{"level":"info","ts":"2025-11-22T00:23:06.821052Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"700ebc6e9635b48f"}
	{"level":"info","ts":"2025-11-22T00:23:06.821092Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"700ebc6e9635b48f"}
	{"level":"info","ts":"2025-11-22T00:23:06.821172Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"700ebc6e9635b48f"}
	{"level":"info","ts":"2025-11-22T00:23:06.821201Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"700ebc6e9635b48f"}
	{"level":"info","ts":"2025-11-22T00:23:06.821231Z","caller":"rafthttp/transport.go:354","msg":"removed remote peer","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"700ebc6e9635b48f"}
	
	
	==> kernel <==
	 00:23:13 up  5:05,  0 user,  load average: 0.17, 0.91, 1.17
	Linux ha-561110 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5a2fb45570b8d8d9729d3fcc9460e054e1a5757ce0b35d5e4c6ab8f496780c4f] <==
	I1122 00:22:41.465247       1 main.go:324] Node ha-561110-m03 has CIDR [10.244.2.0/24] 
	I1122 00:22:41.465303       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1122 00:22:41.465309       1 main.go:324] Node ha-561110-m04 has CIDR [10.244.3.0/24] 
	I1122 00:22:51.473085       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1122 00:22:51.473126       1 main.go:301] handling current node
	I1122 00:22:51.473142       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1122 00:22:51.473148       1 main.go:324] Node ha-561110-m02 has CIDR [10.244.1.0/24] 
	I1122 00:22:51.473281       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1122 00:22:51.473294       1 main.go:324] Node ha-561110-m03 has CIDR [10.244.2.0/24] 
	I1122 00:22:51.473352       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1122 00:22:51.473363       1 main.go:324] Node ha-561110-m04 has CIDR [10.244.3.0/24] 
	I1122 00:23:01.470409       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1122 00:23:01.470552       1 main.go:301] handling current node
	I1122 00:23:01.470584       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1122 00:23:01.470592       1 main.go:324] Node ha-561110-m02 has CIDR [10.244.1.0/24] 
	I1122 00:23:01.470789       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1122 00:23:01.470804       1 main.go:324] Node ha-561110-m03 has CIDR [10.244.2.0/24] 
	I1122 00:23:01.470889       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1122 00:23:01.470900       1 main.go:324] Node ha-561110-m04 has CIDR [10.244.3.0/24] 
	I1122 00:23:11.465362       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1122 00:23:11.465394       1 main.go:301] handling current node
	I1122 00:23:11.465416       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1122 00:23:11.465423       1 main.go:324] Node ha-561110-m02 has CIDR [10.244.1.0/24] 
	I1122 00:23:11.465568       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1122 00:23:11.465574       1 main.go:324] Node ha-561110-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [4cbb3fde391bd86e756416ec260b0b8a5501d5139da802107965d9e012c4eca5] <==
	I1122 00:15:18.445997       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1122 00:15:18.446301       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1122 00:15:18.447701       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3]
	I1122 00:15:18.452038       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1122 00:15:18.452137       1 policy_source.go:240] refreshing policies
	I1122 00:15:18.460639       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1122 00:15:18.471883       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1122 00:15:18.471973       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1122 00:15:18.484728       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1122 00:15:18.486315       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1122 00:15:18.488710       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1122 00:15:18.492798       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1122 00:15:18.495280       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1122 00:15:18.507423       1 cache.go:39] Caches are synced for autoregister controller
	I1122 00:15:18.534574       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:15:18.549678       1 controller.go:667] quota admission added evaluator for: endpoints
	I1122 00:15:18.565788       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1122 00:15:18.571045       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1122 00:15:19.403170       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1122 00:15:19.403318       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	W1122 00:15:19.985311       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I1122 00:15:20.110990       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1122 00:15:22.839985       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1122 00:15:22.952373       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1122 00:15:33.431623       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [1c929db60119ab54f03020d00f2063dc6672d329ea34f4504e502142bffbe644] <==
	I1122 00:14:51.749993       1 serving.go:386] Generated self-signed cert in-memory
	I1122 00:14:53.094715       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1122 00:14:53.095280       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:14:53.099971       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1122 00:14:53.101968       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1122 00:14:53.102195       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1122 00:14:53.102364       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1122 00:15:08.891956       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-controller-manager [555f050993ba210ea8b5a432f7b9d055cece81e4f3e958134fe029c08873937f] <==
	I1122 00:15:22.665955       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:15:22.665980       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1122 00:15:22.665989       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1122 00:15:22.670916       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1122 00:15:22.671810       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1122 00:15:22.671975       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1122 00:15:22.674739       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1122 00:15:22.700683       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1122 00:15:22.700732       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1122 00:15:22.700975       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1122 00:15:22.701031       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-561110-m04"
	I1122 00:15:22.702027       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:15:22.702218       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:15:22.702265       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1122 00:15:22.702335       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1122 00:15:22.702421       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-561110-m04"
	I1122 00:15:22.702475       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-561110"
	I1122 00:15:22.702508       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-561110-m02"
	I1122 00:15:22.702530       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-561110-m03"
	I1122 00:15:22.703121       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1122 00:16:02.360319       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-fg476 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-fg476\": the object has been modified; please apply your changes to the latest version and try again"
	I1122 00:16:02.360917       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"9e1e93e1-00b2-4af4-b92a-649228d61b24", APIVersion:"v1", ResourceVersion:"291", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-fg476 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-fg476": the object has been modified; please apply your changes to the latest version and try again
	I1122 00:21:22.783104       1 taint_eviction.go:111] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-7b57f96db7-jnjz9"
	E1122 00:21:23.049792       1 replica_set.go:587] "Unhandled Error" err="sync \"default/busybox-7b57f96db7\" failed with Operation cannot be fulfilled on replicasets.apps \"busybox-7b57f96db7\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1122 00:23:07.392273       1 garbagecollector.go:360] "Unhandled Error" err="error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"coordination.k8s.io/v1\", Kind:\"Lease\", Name:\"ha-561110-m03\", UID:\"89ed6ab2-2d42-416d-85b4-495b62b93ace\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"kube-node-lease\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Node\", Name:\"ha-561110-m03\", UID:\"60ac0879-7e66-4fe4-865c-9695d0489790\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: leases.coordination.k8s.io \"ha-561110-m03\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [d9e4613f17ffd567cd78a387d7add1e58e4b781fbb445147b8bfca54b9432ab5] <==
	I1122 00:15:21.735861       1 server_linux.go:53] "Using iptables proxy"
	I1122 00:15:22.275352       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1122 00:15:22.376449       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1122 00:15:22.376485       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1122 00:15:22.376557       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1122 00:15:22.535668       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1122 00:15:22.535795       1 server_linux.go:132] "Using iptables Proxier"
	I1122 00:15:22.609065       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1122 00:15:22.609513       1 server.go:527] "Version info" version="v1.34.1"
	I1122 00:15:22.609711       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:15:22.617349       1 config.go:106] "Starting endpoint slice config controller"
	I1122 00:15:22.642095       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1122 00:15:22.642216       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1122 00:15:22.625017       1 config.go:309] "Starting node config controller"
	I1122 00:15:22.642311       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1122 00:15:22.661330       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1122 00:15:22.618034       1 config.go:403] "Starting serviceCIDR config controller"
	I1122 00:15:22.661456       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1122 00:15:22.661484       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1122 00:15:22.617914       1 config.go:200] "Starting service config controller"
	I1122 00:15:22.667161       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1122 00:15:22.669962       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [4360f5517fd5eb7d570a98dee1b801419d3b650d7e890d5ddecc79946fba46db] <==
	E1122 00:15:06.983690       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1122 00:15:07.083902       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1122 00:15:07.669484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1122 00:15:08.371857       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1122 00:15:08.496001       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1122 00:15:09.010289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1122 00:15:09.013639       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1122 00:15:09.181881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1122 00:15:13.452037       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1122 00:15:13.489179       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1122 00:15:13.596578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1122 00:15:15.338465       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1122 00:15:15.586170       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1122 00:15:15.778676       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1122 00:15:16.291567       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1122 00:15:16.393784       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1122 00:15:16.654452       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1122 00:15:16.676020       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1122 00:15:16.720023       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1122 00:15:16.867178       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1122 00:15:17.894056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1122 00:15:17.894162       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1122 00:15:18.097763       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1122 00:15:18.407533       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1122 00:15:40.715350       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 22 00:15:19 ha-561110 kubelet[804]: E1122 00:15:19.202909     804 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-ha-561110\" already exists" pod="kube-system/etcd-ha-561110"
	Nov 22 00:15:19 ha-561110 kubelet[804]: E1122 00:15:19.208208     804 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-ha-561110\" already exists" pod="kube-system/etcd-ha-561110"
	Nov 22 00:15:19 ha-561110 kubelet[804]: I1122 00:15:19.208378     804 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ha-561110"
	Nov 22 00:15:19 ha-561110 kubelet[804]: E1122 00:15:19.222213     804 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ha-561110\" already exists" pod="kube-system/kube-apiserver-ha-561110"
	Nov 22 00:15:19 ha-561110 kubelet[804]: I1122 00:15:19.222405     804 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ha-561110"
	Nov 22 00:15:19 ha-561110 kubelet[804]: E1122 00:15:19.238353     804 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ha-561110\" already exists" pod="kube-system/kube-controller-manager-ha-561110"
	Nov 22 00:15:19 ha-561110 kubelet[804]: I1122 00:15:19.996652     804 apiserver.go:52] "Watching apiserver"
	Nov 22 00:15:20 ha-561110 kubelet[804]: I1122 00:15:20.004192     804 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 22 00:15:20 ha-561110 kubelet[804]: I1122 00:15:20.010755     804 kubelet.go:3202] "Trying to delete pod" pod="kube-system/kube-vip-ha-561110" podUID="f9bbfb1b-cc91-44c4-be9d-f028e6f3038f"
	Nov 22 00:15:20 ha-561110 kubelet[804]: I1122 00:15:20.042558     804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/318c6763-fea1-4564-86f6-18cfad691213-xtables-lock\") pod \"kube-proxy-fh5cv\" (UID: \"318c6763-fea1-4564-86f6-18cfad691213\") " pod="kube-system/kube-proxy-fh5cv"
	Nov 22 00:15:20 ha-561110 kubelet[804]: I1122 00:15:20.042916     804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/edeca4a6-de24-4444-be9c-cdcbf744f52a-lib-modules\") pod \"kindnet-7g65m\" (UID: \"edeca4a6-de24-4444-be9c-cdcbf744f52a\") " pod="kube-system/kindnet-7g65m"
	Nov 22 00:15:20 ha-561110 kubelet[804]: I1122 00:15:20.043044     804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/edeca4a6-de24-4444-be9c-cdcbf744f52a-xtables-lock\") pod \"kindnet-7g65m\" (UID: \"edeca4a6-de24-4444-be9c-cdcbf744f52a\") " pod="kube-system/kindnet-7g65m"
	Nov 22 00:15:20 ha-561110 kubelet[804]: I1122 00:15:20.043629     804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/318c6763-fea1-4564-86f6-18cfad691213-lib-modules\") pod \"kube-proxy-fh5cv\" (UID: \"318c6763-fea1-4564-86f6-18cfad691213\") " pod="kube-system/kube-proxy-fh5cv"
	Nov 22 00:15:20 ha-561110 kubelet[804]: I1122 00:15:20.043908     804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6bf95a26-263b-4088-904d-b344d4826342-tmp\") pod \"storage-provisioner\" (UID: \"6bf95a26-263b-4088-904d-b344d4826342\") " pod="kube-system/storage-provisioner"
	Nov 22 00:15:20 ha-561110 kubelet[804]: I1122 00:15:20.044454     804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/edeca4a6-de24-4444-be9c-cdcbf744f52a-cni-cfg\") pod \"kindnet-7g65m\" (UID: \"edeca4a6-de24-4444-be9c-cdcbf744f52a\") " pod="kube-system/kindnet-7g65m"
	Nov 22 00:15:20 ha-561110 kubelet[804]: I1122 00:15:20.069531     804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12f5cffcd2e0febd6c4ae07da010fd8f" path="/var/lib/kubelet/pods/12f5cffcd2e0febd6c4ae07da010fd8f/volumes"
	Nov 22 00:15:20 ha-561110 kubelet[804]: I1122 00:15:20.170059     804 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 22 00:15:20 ha-561110 kubelet[804]: I1122 00:15:20.199192     804 kubelet.go:3208] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-vip-ha-561110"
	Nov 22 00:15:20 ha-561110 kubelet[804]: I1122 00:15:20.199382     804 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-561110"
	Nov 22 00:15:20 ha-561110 kubelet[804]: W1122 00:15:20.465863     804 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/b491a219f5f6a6da2c04a012513c1a2266c783068f6131eddf3365f4209ece96/crio-b8e440f614e569ef72e19243d1540dd34639d19916d8b0e346545eb4867daf57 WatchSource:0}: Error finding container b8e440f614e569ef72e19243d1540dd34639d19916d8b0e346545eb4867daf57: Status 404 returned error can't find the container with id b8e440f614e569ef72e19243d1540dd34639d19916d8b0e346545eb4867daf57
	Nov 22 00:15:20 ha-561110 kubelet[804]: W1122 00:15:20.651808     804 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/b491a219f5f6a6da2c04a012513c1a2266c783068f6131eddf3365f4209ece96/crio-b641fd83b9816fb348d03cb35df6649a6ab3d78bdff2936914e0167db04fad0a WatchSource:0}: Error finding container b641fd83b9816fb348d03cb35df6649a6ab3d78bdff2936914e0167db04fad0a: Status 404 returned error can't find the container with id b641fd83b9816fb348d03cb35df6649a6ab3d78bdff2936914e0167db04fad0a
	Nov 22 00:15:47 ha-561110 kubelet[804]: E1122 00:15:47.996298     804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60db7bf551c52a828501521a7f79a373f51d5d988223afbc4f6f1a9ca6872e12\": container with ID starting with 60db7bf551c52a828501521a7f79a373f51d5d988223afbc4f6f1a9ca6872e12 not found: ID does not exist" containerID="60db7bf551c52a828501521a7f79a373f51d5d988223afbc4f6f1a9ca6872e12"
	Nov 22 00:15:47 ha-561110 kubelet[804]: I1122 00:15:47.996363     804 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="60db7bf551c52a828501521a7f79a373f51d5d988223afbc4f6f1a9ca6872e12" err="rpc error: code = NotFound desc = could not find container \"60db7bf551c52a828501521a7f79a373f51d5d988223afbc4f6f1a9ca6872e12\": container with ID starting with 60db7bf551c52a828501521a7f79a373f51d5d988223afbc4f6f1a9ca6872e12 not found: ID does not exist"
	Nov 22 00:15:52 ha-561110 kubelet[804]: I1122 00:15:52.388242     804 scope.go:117] "RemoveContainer" containerID="60513ca704c00c488d3491dd4f8a9e84dd69cf4c098d6dddf6f9ecba18d70a70"
	Nov 22 00:16:25 ha-561110 kubelet[804]: I1122 00:16:25.065664     804 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-561110"
	

                                                
                                                
-- /stdout --
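The NotFound errors near the end of the kubelet log above come from kuberuntime_gc.go probing container IDs that the runtime no longer has. If needed, the same IDs can be cross-checked against crio from inside the node; a minimal sketch (the ssh invocation is illustrative and not part of the suite, only crictl ps -a is assumed to be present in the node image):

	out/minikube-linux-arm64 -p ha-561110 ssh "sudo crictl ps -a | grep 60db7bf551c5"

An empty result is consistent with the "ID does not exist" message the kubelet logs for that container.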
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-561110 -n ha-561110
helpers_test.go:269: (dbg) Run:  kubectl --context ha-561110 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-hkwmz
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-561110 describe pod busybox-7b57f96db7-hkwmz
helpers_test.go:290: (dbg) kubectl --context ha-561110 describe pod busybox-7b57f96db7-hkwmz:

                                                
                                                
-- stdout --
	Name:             busybox-7b57f96db7-hkwmz
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-82jj6 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-82jj6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age   From               Message
	  ----     ------            ----  ----               -------
	  Warning  FailedScheduling  112s  default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  112s  default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  8s    default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  8s    default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (8.65s)
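The FailedScheduling events in the describe output above show busybox-7b57f96db7-hkwmz blocked by a mix of node.kubernetes.io/unreachable taints and pod anti-affinity rules while the cluster recovers from the node deletion. A quick way to see which nodes still carry taints at that point, using only the kubectl context from this run (a sketch, not something the test executes):

	kubectl --context ha-561110 get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints[*].key'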

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (3.22s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:415: expected profile "ha-561110" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-561110\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-561110\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSS
haresRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-561110\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{
\"Name\":\"m02\",\"IP\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\
"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"Sta
ticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-561110
helpers_test.go:243: (dbg) docker inspect ha-561110:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b491a219f5f6a6da2c04a012513c1a2266c783068f6131eddf3365f4209ece96",
	        "Created": "2025-11-22T00:08:39.249293688Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 564052,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-22T00:14:41.326793505Z",
	            "FinishedAt": "2025-11-22T00:14:40.718153366Z"
	        },
	        "Image": "sha256:97061622515a85470dad13cb046ad5fc5021fcd2b05aa921a620abb52951cd5d",
	        "ResolvConfPath": "/var/lib/docker/containers/b491a219f5f6a6da2c04a012513c1a2266c783068f6131eddf3365f4209ece96/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b491a219f5f6a6da2c04a012513c1a2266c783068f6131eddf3365f4209ece96/hostname",
	        "HostsPath": "/var/lib/docker/containers/b491a219f5f6a6da2c04a012513c1a2266c783068f6131eddf3365f4209ece96/hosts",
	        "LogPath": "/var/lib/docker/containers/b491a219f5f6a6da2c04a012513c1a2266c783068f6131eddf3365f4209ece96/b491a219f5f6a6da2c04a012513c1a2266c783068f6131eddf3365f4209ece96-json.log",
	        "Name": "/ha-561110",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-561110:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-561110",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b491a219f5f6a6da2c04a012513c1a2266c783068f6131eddf3365f4209ece96",
	                "LowerDir": "/var/lib/docker/overlay2/5b04665b7cab2ec18af91a710d518904c279e2a90668f078e04a26ace79c7488-init/diff:/var/lib/docker/overlay2/7e8788c6de692bc1c3758a2bb2c4b8da0fbba26855f855c0f3b655bfbdd92f8e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5b04665b7cab2ec18af91a710d518904c279e2a90668f078e04a26ace79c7488/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5b04665b7cab2ec18af91a710d518904c279e2a90668f078e04a26ace79c7488/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5b04665b7cab2ec18af91a710d518904c279e2a90668f078e04a26ace79c7488/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-561110",
	                "Source": "/var/lib/docker/volumes/ha-561110/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-561110",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-561110",
	                "name.minikube.sigs.k8s.io": "ha-561110",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "63b3a8bfef41783609e300f295bd9c6ce0b188ddea8ed2fd34f5208c58b47581",
	            "SandboxKey": "/var/run/docker/netns/63b3a8bfef41",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33535"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33536"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33539"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33537"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33538"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-561110": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "96:82:2a:2d:1a:a2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b16c782e3da877b947afab8daed1813e31e3d205de3fc5d50df3784dc479d217",
	                    "EndpointID": "61c267346b225270082d2c669fb1fa8e14bbb2c2c81a704ce5a2c8a50f3d07f7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-561110",
	                        "b491a219f5f6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-561110 -n ha-561110
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-561110 logs -n 25: (1.406086585s)
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-561110 ssh -n ha-561110-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ ssh     │ ha-561110 ssh -n ha-561110-m02 sudo cat /home/docker/cp-test_ha-561110-m03_ha-561110-m02.txt                                         │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ cp      │ ha-561110 cp ha-561110-m03:/home/docker/cp-test.txt ha-561110-m04:/home/docker/cp-test_ha-561110-m03_ha-561110-m04.txt               │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ ssh     │ ha-561110 ssh -n ha-561110-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ ssh     │ ha-561110 ssh -n ha-561110-m04 sudo cat /home/docker/cp-test_ha-561110-m03_ha-561110-m04.txt                                         │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ cp      │ ha-561110 cp testdata/cp-test.txt ha-561110-m04:/home/docker/cp-test.txt                                                             │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ ssh     │ ha-561110 ssh -n ha-561110-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ cp      │ ha-561110 cp ha-561110-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2616405813/001/cp-test_ha-561110-m04.txt │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ ssh     │ ha-561110 ssh -n ha-561110-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ cp      │ ha-561110 cp ha-561110-m04:/home/docker/cp-test.txt ha-561110:/home/docker/cp-test_ha-561110-m04_ha-561110.txt                       │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ ssh     │ ha-561110 ssh -n ha-561110-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ ssh     │ ha-561110 ssh -n ha-561110 sudo cat /home/docker/cp-test_ha-561110-m04_ha-561110.txt                                                 │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ cp      │ ha-561110 cp ha-561110-m04:/home/docker/cp-test.txt ha-561110-m02:/home/docker/cp-test_ha-561110-m04_ha-561110-m02.txt               │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ ssh     │ ha-561110 ssh -n ha-561110-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ ssh     │ ha-561110 ssh -n ha-561110-m02 sudo cat /home/docker/cp-test_ha-561110-m04_ha-561110-m02.txt                                         │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ cp      │ ha-561110 cp ha-561110-m04:/home/docker/cp-test.txt ha-561110-m03:/home/docker/cp-test_ha-561110-m04_ha-561110-m03.txt               │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ ssh     │ ha-561110 ssh -n ha-561110-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ ssh     │ ha-561110 ssh -n ha-561110-m03 sudo cat /home/docker/cp-test_ha-561110-m04_ha-561110-m03.txt                                         │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ node    │ ha-561110 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:13 UTC │
	│ node    │ ha-561110 node start m02 --alsologtostderr -v 5                                                                                      │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:13 UTC │ 22 Nov 25 00:14 UTC │
	│ node    │ ha-561110 node list --alsologtostderr -v 5                                                                                           │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:14 UTC │                     │
	│ stop    │ ha-561110 stop --alsologtostderr -v 5                                                                                                │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:14 UTC │ 22 Nov 25 00:14 UTC │
	│ start   │ ha-561110 start --wait true --alsologtostderr -v 5                                                                                   │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:14 UTC │                     │
	│ node    │ ha-561110 node list --alsologtostderr -v 5                                                                                           │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:23 UTC │                     │
	│ node    │ ha-561110 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-561110 │ jenkins │ v1.37.0 │ 22 Nov 25 00:23 UTC │ 22 Nov 25 00:23 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 00:14:41
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1122 00:14:41.051374  563925 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:14:41.051556  563925 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:14:41.051586  563925 out.go:374] Setting ErrFile to fd 2...
	I1122 00:14:41.051607  563925 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:14:41.051880  563925 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1122 00:14:41.052266  563925 out.go:368] Setting JSON to false
	I1122 00:14:41.053166  563925 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":17797,"bootTime":1763752684,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1122 00:14:41.053270  563925 start.go:143] virtualization:  
	I1122 00:14:41.056667  563925 out.go:179] * [ha-561110] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1122 00:14:41.060532  563925 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:14:41.060603  563925 notify.go:221] Checking for updates...
	I1122 00:14:41.067352  563925 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:14:41.070297  563925 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1122 00:14:41.073934  563925 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-513600/.minikube
	I1122 00:14:41.076934  563925 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1122 00:14:41.079898  563925 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:14:41.083494  563925 config.go:182] Loaded profile config "ha-561110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:14:41.083606  563925 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:14:41.111284  563925 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1122 00:14:41.111387  563925 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:14:41.175037  563925 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-22 00:14:41.165296296 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:14:41.175148  563925 docker.go:319] overlay module found
	I1122 00:14:41.178250  563925 out.go:179] * Using the docker driver based on existing profile
	I1122 00:14:41.180953  563925 start.go:309] selected driver: docker
	I1122 00:14:41.180971  563925 start.go:930] validating driver "docker" against &{Name:ha-561110 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-561110 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:14:41.181129  563925 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:14:41.181235  563925 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:14:41.238102  563925 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-22 00:14:41.228646014 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:14:41.238520  563925 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:14:41.238556  563925 cni.go:84] Creating CNI manager for ""
	I1122 00:14:41.238614  563925 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1122 00:14:41.238661  563925 start.go:353] cluster config:
	{Name:ha-561110 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-561110 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:14:41.241877  563925 out.go:179] * Starting "ha-561110" primary control-plane node in "ha-561110" cluster
	I1122 00:14:41.244623  563925 cache.go:134] Beginning downloading kic base image for docker with crio
	I1122 00:14:41.247356  563925 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:14:41.250191  563925 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:14:41.250238  563925 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1122 00:14:41.250251  563925 cache.go:65] Caching tarball of preloaded images
	I1122 00:14:41.250256  563925 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:14:41.250328  563925 preload.go:238] Found /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1122 00:14:41.250339  563925 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1122 00:14:41.250480  563925 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/config.json ...
	I1122 00:14:41.275134  563925 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:14:41.275155  563925 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:14:41.275171  563925 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:14:41.275193  563925 start.go:360] acquireMachinesLock for ha-561110: {Name:mkb487371897d491a1a254bbfa266b10650bf7bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:14:41.275256  563925 start.go:364] duration metric: took 36.265µs to acquireMachinesLock for "ha-561110"
	I1122 00:14:41.275288  563925 start.go:96] Skipping create...Using existing machine configuration
	I1122 00:14:41.275297  563925 fix.go:54] fixHost starting: 
	I1122 00:14:41.275560  563925 cli_runner.go:164] Run: docker container inspect ha-561110 --format={{.State.Status}}
	I1122 00:14:41.292644  563925 fix.go:112] recreateIfNeeded on ha-561110: state=Stopped err=<nil>
	W1122 00:14:41.292679  563925 fix.go:138] unexpected machine state, will restart: <nil>
	I1122 00:14:41.295991  563925 out.go:252] * Restarting existing docker container for "ha-561110" ...
	I1122 00:14:41.296094  563925 cli_runner.go:164] Run: docker start ha-561110
	I1122 00:14:41.567342  563925 cli_runner.go:164] Run: docker container inspect ha-561110 --format={{.State.Status}}
	I1122 00:14:41.593759  563925 kic.go:430] container "ha-561110" state is running.
	I1122 00:14:41.594265  563925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-561110
	I1122 00:14:41.625087  563925 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/config.json ...
	I1122 00:14:41.625337  563925 machine.go:94] provisionDockerMachine start ...
	I1122 00:14:41.625405  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110
	I1122 00:14:41.644350  563925 main.go:143] libmachine: Using SSH client type: native
	I1122 00:14:41.644684  563925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33535 <nil> <nil>}
	I1122 00:14:41.644692  563925 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:14:41.645633  563925 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1122 00:14:44.789929  563925 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-561110
	
	I1122 00:14:44.789988  563925 ubuntu.go:182] provisioning hostname "ha-561110"
	I1122 00:14:44.790089  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110
	I1122 00:14:44.809008  563925 main.go:143] libmachine: Using SSH client type: native
	I1122 00:14:44.809338  563925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33535 <nil> <nil>}
	I1122 00:14:44.809354  563925 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-561110 && echo "ha-561110" | sudo tee /etc/hostname
	I1122 00:14:44.959054  563925 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-561110
	
	I1122 00:14:44.959174  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110
	I1122 00:14:44.977402  563925 main.go:143] libmachine: Using SSH client type: native
	I1122 00:14:44.977725  563925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33535 <nil> <nil>}
	I1122 00:14:44.977747  563925 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-561110' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-561110/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-561110' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:14:45.148701  563925 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1122 00:14:45.148780  563925 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-513600/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-513600/.minikube}
	I1122 00:14:45.148894  563925 ubuntu.go:190] setting up certificates
	I1122 00:14:45.148911  563925 provision.go:84] configureAuth start
	I1122 00:14:45.149003  563925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-561110
	I1122 00:14:45.178821  563925 provision.go:143] copyHostCerts
	I1122 00:14:45.178872  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem
	I1122 00:14:45.178980  563925 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem, removing ...
	I1122 00:14:45.179051  563925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem
	I1122 00:14:45.179147  563925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem (1078 bytes)
	I1122 00:14:45.179368  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem
	I1122 00:14:45.179396  563925 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem, removing ...
	I1122 00:14:45.179408  563925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem
	I1122 00:14:45.179513  563925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem (1123 bytes)
	I1122 00:14:45.179582  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem
	I1122 00:14:45.179688  563925 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem, removing ...
	I1122 00:14:45.179693  563925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem
	I1122 00:14:45.179763  563925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem (1675 bytes)
	I1122 00:14:45.179869  563925 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem org=jenkins.ha-561110 san=[127.0.0.1 192.168.49.2 ha-561110 localhost minikube]
	I1122 00:14:45.360921  563925 provision.go:177] copyRemoteCerts
	I1122 00:14:45.360991  563925 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:14:45.361031  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110
	I1122 00:14:45.379675  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33535 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110/id_rsa Username:docker}
	I1122 00:14:45.481986  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1122 00:14:45.482096  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:14:45.500661  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1122 00:14:45.500750  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1122 00:14:45.519280  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1122 00:14:45.519388  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1122 00:14:45.538099  563925 provision.go:87] duration metric: took 389.17288ms to configureAuth
	I1122 00:14:45.538126  563925 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:14:45.538361  563925 config.go:182] Loaded profile config "ha-561110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:14:45.538464  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110
	I1122 00:14:45.557843  563925 main.go:143] libmachine: Using SSH client type: native
	I1122 00:14:45.558153  563925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33535 <nil> <nil>}
	I1122 00:14:45.558173  563925 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1122 00:14:45.916699  563925 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1122 00:14:45.916722  563925 machine.go:97] duration metric: took 4.291375262s to provisionDockerMachine
	I1122 00:14:45.916734  563925 start.go:293] postStartSetup for "ha-561110" (driver="docker")
	I1122 00:14:45.916744  563925 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:14:45.916808  563925 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:14:45.916864  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110
	I1122 00:14:45.937454  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33535 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110/id_rsa Username:docker}
	I1122 00:14:46.038557  563925 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:14:46.042104  563925 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:14:46.042148  563925 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:14:46.042162  563925 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/addons for local assets ...
	I1122 00:14:46.042244  563925 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/files for local assets ...
	I1122 00:14:46.042340  563925 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem -> 5169372.pem in /etc/ssl/certs
	I1122 00:14:46.042358  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem -> /etc/ssl/certs/5169372.pem
	I1122 00:14:46.042519  563925 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:14:46.050335  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /etc/ssl/certs/5169372.pem (1708 bytes)
	I1122 00:14:46.070075  563925 start.go:296] duration metric: took 153.324249ms for postStartSetup
	I1122 00:14:46.070158  563925 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:14:46.070200  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110
	I1122 00:14:46.089314  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33535 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110/id_rsa Username:docker}
	I1122 00:14:46.187250  563925 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:14:46.192065  563925 fix.go:56] duration metric: took 4.916761973s for fixHost
	I1122 00:14:46.192091  563925 start.go:83] releasing machines lock for "ha-561110", held for 4.916821031s
	I1122 00:14:46.192188  563925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-561110
	I1122 00:14:46.209139  563925 ssh_runner.go:195] Run: cat /version.json
	I1122 00:14:46.209197  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110
	I1122 00:14:46.209461  563925 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:14:46.209511  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110
	I1122 00:14:46.233161  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33535 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110/id_rsa Username:docker}
	I1122 00:14:46.237608  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33535 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110/id_rsa Username:docker}
	I1122 00:14:46.417414  563925 ssh_runner.go:195] Run: systemctl --version
	I1122 00:14:46.423708  563925 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1122 00:14:46.459853  563925 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:14:46.464430  563925 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:14:46.464499  563925 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:14:46.472070  563925 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1122 00:14:46.472092  563925 start.go:496] detecting cgroup driver to use...
	I1122 00:14:46.472140  563925 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1122 00:14:46.472192  563925 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1122 00:14:46.487805  563925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1122 00:14:46.501008  563925 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:14:46.501113  563925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:14:46.517083  563925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:14:46.530035  563925 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:14:46.634532  563925 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:14:46.753160  563925 docker.go:234] disabling docker service ...
	I1122 00:14:46.753271  563925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:14:46.768112  563925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:14:46.781109  563925 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:14:46.889282  563925 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:14:47.012744  563925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:14:47.026639  563925 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:14:47.040275  563925 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1122 00:14:47.040386  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:14:47.049142  563925 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1122 00:14:47.049222  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:14:47.057948  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:14:47.066761  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:14:47.076164  563925 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:14:47.085123  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:14:47.094801  563925 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:14:47.102952  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:14:47.111641  563925 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:14:47.119239  563925 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:14:47.126541  563925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:14:47.233256  563925 ssh_runner.go:195] Run: sudo systemctl restart crio
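(Editor's note: the sequence of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf in place, setting the pause image, cgroup manager, conmon cgroup, and an unprivileged-port sysctl before the restart. A hedged spot-check of the result; the keys and values are exactly the ones set by the commands in the log.)

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# expected, per the edits above:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",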
	I1122 00:14:47.384501  563925 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1122 00:14:47.384567  563925 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1122 00:14:47.388356  563925 start.go:564] Will wait 60s for crictl version
	I1122 00:14:47.388468  563925 ssh_runner.go:195] Run: which crictl
	I1122 00:14:47.392030  563925 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:14:47.416283  563925 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1122 00:14:47.416422  563925 ssh_runner.go:195] Run: crio --version
	I1122 00:14:47.444890  563925 ssh_runner.go:195] Run: crio --version
	I1122 00:14:47.480934  563925 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1122 00:14:47.483635  563925 cli_runner.go:164] Run: docker network inspect ha-561110 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:14:47.499516  563925 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1122 00:14:47.503369  563925 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:14:47.513239  563925 kubeadm.go:884] updating cluster {Name:ha-561110 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-561110 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1122 00:14:47.513386  563925 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:14:47.513453  563925 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:14:47.547714  563925 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:14:47.547741  563925 crio.go:433] Images already preloaded, skipping extraction
	I1122 00:14:47.547794  563925 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:14:47.572446  563925 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:14:47.572474  563925 cache_images.go:86] Images are preloaded, skipping loading
	I1122 00:14:47.572483  563925 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1122 00:14:47.572577  563925 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-561110 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-561110 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
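(Editor's note: the kubelet unit fragment above is later copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, per the scp step further down, so the override takes effect only after a daemon-reload followed by a kubelet (re)start, which the log performs before starting kubelet. A sketch for inspecting the merged unit on the node:)

	sudo systemctl daemon-reload
	systemctl cat kubelet     # shows kubelet.service plus the 10-kubeadm.conf drop-in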
	I1122 00:14:47.572661  563925 ssh_runner.go:195] Run: crio config
	I1122 00:14:47.634066  563925 cni.go:84] Creating CNI manager for ""
	I1122 00:14:47.634094  563925 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1122 00:14:47.634114  563925 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1122 00:14:47.634156  563925 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-561110 NodeName:ha-561110 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 00:14:47.634316  563925 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-561110"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
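(Editor's note: the generated kubeadm configuration above combines InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration in a single file, which the log later copies to /var/tmp/minikube/kubeadm.yaml.new. As a hedged sketch, recent kubeadm releases can lint such a file before it is applied; both the "config validate" subcommand and the binary location under /var/lib/minikube/binaries are assumptions about this environment.)

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new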
	
	I1122 00:14:47.634340  563925 kube-vip.go:115] generating kube-vip config ...
	I1122 00:14:47.634397  563925 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1122 00:14:47.646470  563925 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1122 00:14:47.646593  563925 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
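(Editor's note: the "lsmod | grep ip_vs" probe a few lines above exited with status 1, so minikube gives up enabling control-plane load-balancing and generates the VIP-only kube-vip manifest shown here. A hedged sketch of checking and loading the modules on a host where they are available; the module names are the standard ip_vs set, and whether the kernel actually ships them is host-specific.)

	lsmod | grep ip_vs || sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh
	# if the modules load, a subsequent start would not hit the "giving up" path above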
	I1122 00:14:47.646695  563925 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 00:14:47.654183  563925 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:14:47.654249  563925 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1122 00:14:47.661699  563925 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1122 00:14:47.674165  563925 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:14:47.686331  563925 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1122 00:14:47.698542  563925 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1122 00:14:47.711254  563925 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1122 00:14:47.714862  563925 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:14:47.724174  563925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:14:47.839371  563925 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:14:47.853685  563925 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110 for IP: 192.168.49.2
	I1122 00:14:47.853753  563925 certs.go:195] generating shared ca certs ...
	I1122 00:14:47.853787  563925 certs.go:227] acquiring lock for ca certs: {Name:mkaf4c79493334cb2058b349c36be7473837f9b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:14:47.853987  563925 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key
	I1122 00:14:47.854075  563925 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key
	I1122 00:14:47.854111  563925 certs.go:257] generating profile certs ...
	I1122 00:14:47.854232  563925 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/client.key
	I1122 00:14:47.854280  563925 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.key.17887f76
	I1122 00:14:47.854319  563925 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.crt.17887f76 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1122 00:14:47.941434  563925 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.crt.17887f76 ...
	I1122 00:14:47.941949  563925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.crt.17887f76: {Name:mk196d114e0b17147f8bed35c49f594a2533cc5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:14:47.942154  563925 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.key.17887f76 ...
	I1122 00:14:47.942191  563925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.key.17887f76: {Name:mk34aa50af1cad4bd0a7687c2b98f2a65013e746 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:14:47.942314  563925 certs.go:382] copying /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.crt.17887f76 -> /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.crt
	I1122 00:14:47.942500  563925 certs.go:386] copying /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.key.17887f76 -> /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.key
	I1122 00:14:47.942693  563925 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.key
	I1122 00:14:47.942729  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1122 00:14:47.942772  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1122 00:14:47.942814  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1122 00:14:47.942845  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1122 00:14:47.942881  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1122 00:14:47.942927  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1122 00:14:47.942960  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1122 00:14:47.942996  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1122 00:14:47.943078  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem (1338 bytes)
	W1122 00:14:47.943133  563925 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937_empty.pem, impossibly tiny 0 bytes
	I1122 00:14:47.943156  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:14:47.943215  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:14:47.943265  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:14:47.943352  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem (1675 bytes)
	I1122 00:14:47.943431  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem (1708 bytes)
	I1122 00:14:47.943512  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:14:47.943556  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem -> /usr/share/ca-certificates/516937.pem
	I1122 00:14:47.943584  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem -> /usr/share/ca-certificates/5169372.pem
	I1122 00:14:47.944164  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:14:47.970032  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1122 00:14:47.993299  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:14:48.024732  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:14:48.049916  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1122 00:14:48.074841  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1122 00:14:48.093300  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:14:48.113386  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1122 00:14:48.133760  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:14:48.153049  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem --> /usr/share/ca-certificates/516937.pem (1338 bytes)
	I1122 00:14:48.173569  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /usr/share/ca-certificates/5169372.pem (1708 bytes)
	I1122 00:14:48.198292  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 00:14:48.211957  563925 ssh_runner.go:195] Run: openssl version
	I1122 00:14:48.218515  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516937.pem && ln -fs /usr/share/ca-certificates/516937.pem /etc/ssl/certs/516937.pem"
	I1122 00:14:48.228447  563925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516937.pem
	I1122 00:14:48.232426  563925 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:55 /usr/share/ca-certificates/516937.pem
	I1122 00:14:48.232551  563925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516937.pem
	I1122 00:14:48.273469  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516937.pem /etc/ssl/certs/51391683.0"
	I1122 00:14:48.281348  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5169372.pem && ln -fs /usr/share/ca-certificates/5169372.pem /etc/ssl/certs/5169372.pem"
	I1122 00:14:48.289635  563925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5169372.pem
	I1122 00:14:48.293430  563925 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:55 /usr/share/ca-certificates/5169372.pem
	I1122 00:14:48.293550  563925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5169372.pem
	I1122 00:14:48.335324  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5169372.pem /etc/ssl/certs/3ec20f2e.0"
	I1122 00:14:48.343382  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:14:48.351346  563925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:14:48.354892  563925 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:14:48.354958  563925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:14:48.398958  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
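(Editor's note: the hash-and-symlink pairs above follow OpenSSL's hashed-directory convention; each trusted certificate gets a link in /etc/ssl/certs named after its subject hash with a ".0" suffix, which is what the "-hash -noout" invocations compute. A small sketch reproducing the link for the minikube CA, using the same paths as the log; per the log, the hash resolves to b5213941 for this CA.)

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"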
	I1122 00:14:48.406910  563925 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:14:48.410614  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1122 00:14:48.451560  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1122 00:14:48.492804  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1122 00:14:48.540013  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1122 00:14:48.585271  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1122 00:14:48.653970  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
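(Editor's note: each of the expiry checks above uses "openssl x509 -checkend 86400", which exits 0 only if the certificate is still valid 86400 seconds, i.e. 24 hours, from now. A minimal standalone example against one of the same files:)

	sudo openssl x509 -noout -checkend 86400 \
	  -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	  && echo "valid for >24h" || echo "expires (or has expired) within 24h"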
	I1122 00:14:48.747548  563925 kubeadm.go:401] StartCluster: {Name:ha-561110 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-561110 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:14:48.747694  563925 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1122 00:14:48.747775  563925 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:14:48.836090  563925 cri.go:89] found id: "4cbb3fde391bd86e756416ec260b0b8a5501d5139da802107965d9e012c4eca5"
	I1122 00:14:48.836127  563925 cri.go:89] found id: "4360f5517fd5eb7d570a98dee1b801419d3b650d7e890d5ddecc79946fba46db"
	I1122 00:14:48.836132  563925 cri.go:89] found id: "a395e7473ffe2b7999ae75a70e19b4f153d459c8ccae48aeeb71b5b6248cc1f2"
	I1122 00:14:48.836136  563925 cri.go:89] found id: "9fdf72902e6e01af8761552bc83ad83cdf5a34600401d1ee9126ac6a25ae0e37"
	I1122 00:14:48.836140  563925 cri.go:89] found id: "1c929db60119ab54f03020d00f2063dc6672d329ea34f4504e502142bffbe644"
	I1122 00:14:48.836148  563925 cri.go:89] found id: ""
	I1122 00:14:48.836216  563925 ssh_runner.go:195] Run: sudo runc list -f json
	W1122 00:14:48.857525  563925 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:14:48Z" level=error msg="open /run/runc: no such file or directory"
	I1122 00:14:48.857613  563925 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 00:14:48.878520  563925 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1122 00:14:48.878565  563925 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1122 00:14:48.878624  563925 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1122 00:14:48.898381  563925 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1122 00:14:48.898972  563925 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-561110" does not appear in /home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1122 00:14:48.899101  563925 kubeconfig.go:62] /home/jenkins/minikube-integration/21934-513600/kubeconfig needs updating (will repair): [kubeconfig missing "ha-561110" cluster setting kubeconfig missing "ha-561110" context setting]
	I1122 00:14:48.900028  563925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/kubeconfig: {Name:mkcd094d8a765e5a81369c0fb33cb5e696b17bb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:14:48.901567  563925 kapi.go:59] client config for ha-561110: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/client.crt", KeyFile:"/home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/client.key", CAFile:"/home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1122 00:14:48.907943  563925 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1122 00:14:48.907972  563925 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1122 00:14:48.907979  563925 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1122 00:14:48.907984  563925 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1122 00:14:48.907993  563925 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1122 00:14:48.908413  563925 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1122 00:14:48.908668  563925 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1122 00:14:48.938459  563925 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1122 00:14:48.938496  563925 kubeadm.go:602] duration metric: took 59.924061ms to restartPrimaryControlPlane
	I1122 00:14:48.938507  563925 kubeadm.go:403] duration metric: took 190.97977ms to StartCluster
	I1122 00:14:48.938533  563925 settings.go:142] acquiring lock: {Name:mk6c31eb57ec65b047b78b4e1046e03fe7cc77bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:14:48.938632  563925 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1122 00:14:48.939442  563925 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/kubeconfig: {Name:mkcd094d8a765e5a81369c0fb33cb5e696b17bb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:14:48.939701  563925 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:14:48.939739  563925 start.go:242] waiting for startup goroutines ...
	I1122 00:14:48.939758  563925 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1122 00:14:48.940342  563925 config.go:182] Loaded profile config "ha-561110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:14:48.944134  563925 out.go:179] * Enabled addons: 
	I1122 00:14:48.947186  563925 addons.go:530] duration metric: took 7.425265ms for enable addons: enabled=[]
	I1122 00:14:48.947258  563925 start.go:247] waiting for cluster config update ...
	I1122 00:14:48.947278  563925 start.go:256] writing updated cluster config ...
	I1122 00:14:48.950835  563925 out.go:203] 
	I1122 00:14:48.954183  563925 config.go:182] Loaded profile config "ha-561110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:14:48.954390  563925 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/config.json ...
	I1122 00:14:48.958001  563925 out.go:179] * Starting "ha-561110-m02" control-plane node in "ha-561110" cluster
	I1122 00:14:48.961037  563925 cache.go:134] Beginning downloading kic base image for docker with crio
	I1122 00:14:48.964123  563925 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:14:48.966981  563925 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:14:48.967024  563925 cache.go:65] Caching tarball of preloaded images
	I1122 00:14:48.967169  563925 preload.go:238] Found /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1122 00:14:48.967185  563925 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1122 00:14:48.967352  563925 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/config.json ...
	I1122 00:14:48.967608  563925 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:14:49.000604  563925 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:14:49.000625  563925 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:14:49.000646  563925 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:14:49.000671  563925 start.go:360] acquireMachinesLock for ha-561110-m02: {Name:mkb358f78002efa4c17b8c7cead5ae57992aea2d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:14:49.000737  563925 start.go:364] duration metric: took 50.534µs to acquireMachinesLock for "ha-561110-m02"
	I1122 00:14:49.000757  563925 start.go:96] Skipping create...Using existing machine configuration
	I1122 00:14:49.000763  563925 fix.go:54] fixHost starting: m02
	I1122 00:14:49.001076  563925 cli_runner.go:164] Run: docker container inspect ha-561110-m02 --format={{.State.Status}}
	I1122 00:14:49.034056  563925 fix.go:112] recreateIfNeeded on ha-561110-m02: state=Stopped err=<nil>
	W1122 00:14:49.034088  563925 fix.go:138] unexpected machine state, will restart: <nil>
	I1122 00:14:49.037399  563925 out.go:252] * Restarting existing docker container for "ha-561110-m02" ...
	I1122 00:14:49.037518  563925 cli_runner.go:164] Run: docker start ha-561110-m02
	I1122 00:14:49.451675  563925 cli_runner.go:164] Run: docker container inspect ha-561110-m02 --format={{.State.Status}}
	I1122 00:14:49.475681  563925 kic.go:430] container "ha-561110-m02" state is running.
	I1122 00:14:49.476112  563925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-561110-m02
	I1122 00:14:49.506374  563925 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/config.json ...
	I1122 00:14:49.506719  563925 machine.go:94] provisionDockerMachine start ...
	I1122 00:14:49.506835  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m02
	I1122 00:14:49.550202  563925 main.go:143] libmachine: Using SSH client type: native
	I1122 00:14:49.550557  563925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33540 <nil> <nil>}
	I1122 00:14:49.550573  563925 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:14:49.551331  563925 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37062->127.0.0.1:33540: read: connection reset by peer
	I1122 00:14:52.908642  563925 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-561110-m02
	
	I1122 00:14:52.908715  563925 ubuntu.go:182] provisioning hostname "ha-561110-m02"
	I1122 00:14:52.908805  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m02
	I1122 00:14:52.953932  563925 main.go:143] libmachine: Using SSH client type: native
	I1122 00:14:52.954246  563925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33540 <nil> <nil>}
	I1122 00:14:52.954258  563925 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-561110-m02 && echo "ha-561110-m02" | sudo tee /etc/hostname
	I1122 00:14:53.345252  563925 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-561110-m02
	
	I1122 00:14:53.345401  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m02
	I1122 00:14:53.377691  563925 main.go:143] libmachine: Using SSH client type: native
	I1122 00:14:53.378150  563925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33540 <nil> <nil>}
	I1122 00:14:53.378172  563925 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-561110-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-561110-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-561110-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:14:53.591463  563925 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1122 00:14:53.591496  563925 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-513600/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-513600/.minikube}
	I1122 00:14:53.591513  563925 ubuntu.go:190] setting up certificates
	I1122 00:14:53.591526  563925 provision.go:84] configureAuth start
	I1122 00:14:53.591597  563925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-561110-m02
	I1122 00:14:53.618168  563925 provision.go:143] copyHostCerts
	I1122 00:14:53.618211  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem
	I1122 00:14:53.618242  563925 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem, removing ...
	I1122 00:14:53.618253  563925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem
	I1122 00:14:53.618333  563925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem (1078 bytes)
	I1122 00:14:53.618435  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem
	I1122 00:14:53.618458  563925 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem, removing ...
	I1122 00:14:53.618465  563925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem
	I1122 00:14:53.618494  563925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem (1123 bytes)
	I1122 00:14:53.618552  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem
	I1122 00:14:53.618576  563925 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem, removing ...
	I1122 00:14:53.618584  563925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem
	I1122 00:14:53.618612  563925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem (1675 bytes)
	I1122 00:14:53.618665  563925 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem org=jenkins.ha-561110-m02 san=[127.0.0.1 192.168.49.3 ha-561110-m02 localhost minikube]
	I1122 00:14:53.787782  563925 provision.go:177] copyRemoteCerts
	I1122 00:14:53.787855  563925 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:14:53.787902  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m02
	I1122 00:14:53.805764  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110-m02/id_rsa Username:docker}
	I1122 00:14:53.914816  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1122 00:14:53.914879  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:14:53.944075  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1122 00:14:53.944134  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1122 00:14:53.978384  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1122 00:14:53.978443  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1122 00:14:54.007139  563925 provision.go:87] duration metric: took 415.59481ms to configureAuth
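(Editor's note: configureAuth above regenerates the docker-machine server certificate for ha-561110-m02 with the SAN list shown in the log, 127.0.0.1, 192.168.49.3, ha-561110-m02, localhost, minikube, and copyRemoteCerts then pushes it to /etc/docker on the node. A hedged sketch for inspecting those SANs from the host side; the server.pem path is the one used in the copyRemoteCerts lines.)

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem \
	  | grep -A1 "Subject Alternative Name"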
	I1122 00:14:54.007174  563925 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:14:54.007455  563925 config.go:182] Loaded profile config "ha-561110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:14:54.007583  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m02
	I1122 00:14:54.047939  563925 main.go:143] libmachine: Using SSH client type: native
	I1122 00:14:54.048267  563925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33540 <nil> <nil>}
	I1122 00:14:54.048291  563925 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1122 00:14:54.482099  563925 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1122 00:14:54.482120  563925 machine.go:97] duration metric: took 4.975378731s to provisionDockerMachine
	I1122 00:14:54.482133  563925 start.go:293] postStartSetup for "ha-561110-m02" (driver="docker")
	I1122 00:14:54.482144  563925 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:14:54.482209  563925 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:14:54.482252  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m02
	I1122 00:14:54.500164  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110-m02/id_rsa Username:docker}
	I1122 00:14:54.602698  563925 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:14:54.606253  563925 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:14:54.606285  563925 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:14:54.606296  563925 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/addons for local assets ...
	I1122 00:14:54.606352  563925 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/files for local assets ...
	I1122 00:14:54.606439  563925 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem -> 5169372.pem in /etc/ssl/certs
	I1122 00:14:54.606450  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem -> /etc/ssl/certs/5169372.pem
	I1122 00:14:54.606572  563925 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:14:54.614732  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /etc/ssl/certs/5169372.pem (1708 bytes)
	I1122 00:14:54.633198  563925 start.go:296] duration metric: took 151.050123ms for postStartSetup
	I1122 00:14:54.633327  563925 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:14:54.633378  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m02
	I1122 00:14:54.651888  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110-m02/id_rsa Username:docker}
	I1122 00:14:54.751498  563925 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:14:54.757858  563925 fix.go:56] duration metric: took 5.757088169s for fixHost
	I1122 00:14:54.757886  563925 start.go:83] releasing machines lock for "ha-561110-m02", held for 5.757140204s
	I1122 00:14:54.757958  563925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-561110-m02
	I1122 00:14:54.778371  563925 out.go:179] * Found network options:
	I1122 00:14:54.781341  563925 out.go:179]   - NO_PROXY=192.168.49.2
	W1122 00:14:54.784285  563925 proxy.go:120] fail to check proxy env: Error ip not in block
	W1122 00:14:54.784332  563925 proxy.go:120] fail to check proxy env: Error ip not in block
	I1122 00:14:54.784409  563925 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1122 00:14:54.784457  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m02
	I1122 00:14:54.784734  563925 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:14:54.784793  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m02
	I1122 00:14:54.806895  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110-m02/id_rsa Username:docker}
	I1122 00:14:54.810601  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33540 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110-m02/id_rsa Username:docker}
	I1122 00:14:54.952580  563925 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:14:55.010644  563925 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:14:55.010736  563925 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:14:55.020151  563925 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1122 00:14:55.020182  563925 start.go:496] detecting cgroup driver to use...
	I1122 00:14:55.020226  563925 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1122 00:14:55.020299  563925 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1122 00:14:55.036774  563925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1122 00:14:55.050901  563925 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:14:55.051008  563925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:14:55.067844  563925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:14:55.088601  563925 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:14:55.315735  563925 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:14:55.558850  563925 docker.go:234] disabling docker service ...
	I1122 00:14:55.558960  563925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:14:55.576438  563925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:14:55.595046  563925 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:14:55.815234  563925 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:14:56.006098  563925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:14:56.021481  563925 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:14:56.044364  563925 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1122 00:14:56.044478  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:14:56.068864  563925 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1122 00:14:56.068980  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:14:56.084397  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:14:56.114539  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:14:56.145163  563925 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:14:56.167039  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:14:56.186342  563925 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:14:56.205126  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:14:56.216422  563925 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:14:56.246320  563925 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:14:56.266882  563925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:14:56.589643  563925 ssh_runner.go:195] Run: sudo systemctl restart crio
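The block above rewrites the node's cri-o configuration before restarting it: crictl is pointed at the crio socket, the pause image, cgroup manager, and conmon cgroup are set in /etc/crio/crio.conf.d/02-crio.conf, and IPv4 forwarding is enabled, among other tweaks. A condensed shell equivalent, with paths and values taken from the log (a sketch, not minikube's exact implementation; CONF is just shorthand introduced here):

    printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml
    CONF=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
    sudo systemctl daemon-reload && sudo systemctl restart crio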
	I1122 00:14:56.984258  563925 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1122 00:14:56.984384  563925 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1122 00:14:56.988684  563925 start.go:564] Will wait 60s for crictl version
	I1122 00:14:56.988823  563925 ssh_runner.go:195] Run: which crictl
	I1122 00:14:56.993930  563925 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:14:57.036836  563925 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1122 00:14:57.036996  563925 ssh_runner.go:195] Run: crio --version
	I1122 00:14:57.084070  563925 ssh_runner.go:195] Run: crio --version
	I1122 00:14:57.125443  563925 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1122 00:14:57.128539  563925 out.go:179]   - env NO_PROXY=192.168.49.2
	I1122 00:14:57.131626  563925 cli_runner.go:164] Run: docker network inspect ha-561110 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:14:57.158795  563925 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1122 00:14:57.173001  563925 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:14:57.195629  563925 mustload.go:66] Loading cluster: ha-561110
	I1122 00:14:57.195865  563925 config.go:182] Loaded profile config "ha-561110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:14:57.196127  563925 cli_runner.go:164] Run: docker container inspect ha-561110 --format={{.State.Status}}
	I1122 00:14:57.223215  563925 host.go:66] Checking if "ha-561110" exists ...
	I1122 00:14:57.223486  563925 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110 for IP: 192.168.49.3
	I1122 00:14:57.223499  563925 certs.go:195] generating shared ca certs ...
	I1122 00:14:57.223514  563925 certs.go:227] acquiring lock for ca certs: {Name:mkaf4c79493334cb2058b349c36be7473837f9b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:14:57.223627  563925 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key
	I1122 00:14:57.223673  563925 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key
	I1122 00:14:57.223683  563925 certs.go:257] generating profile certs ...
	I1122 00:14:57.223760  563925 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/client.key
	I1122 00:14:57.223818  563925 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.key.1995a48d
	I1122 00:14:57.223886  563925 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.key
	I1122 00:14:57.223904  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1122 00:14:57.223916  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1122 00:14:57.223932  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1122 00:14:57.223943  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1122 00:14:57.223958  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1122 00:14:57.223970  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1122 00:14:57.223985  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1122 00:14:57.223995  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1122 00:14:57.224044  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem (1338 bytes)
	W1122 00:14:57.224081  563925 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937_empty.pem, impossibly tiny 0 bytes
	I1122 00:14:57.224093  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:14:57.224122  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:14:57.224153  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:14:57.224179  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem (1675 bytes)
	I1122 00:14:57.224229  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem (1708 bytes)
	I1122 00:14:57.224300  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem -> /usr/share/ca-certificates/5169372.pem
	I1122 00:14:57.224317  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:14:57.224334  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem -> /usr/share/ca-certificates/516937.pem
	I1122 00:14:57.224393  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110
	I1122 00:14:57.252760  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33535 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110/id_rsa Username:docker}
	I1122 00:14:57.354098  563925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1122 00:14:57.358457  563925 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1122 00:14:57.367394  563925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1122 00:14:57.371898  563925 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1122 00:14:57.380426  563925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1122 00:14:57.384846  563925 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1122 00:14:57.393409  563925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1122 00:14:57.397317  563925 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1122 00:14:57.405462  563925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1122 00:14:57.409765  563925 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1122 00:14:57.418123  563925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1122 00:14:57.422240  563925 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1122 00:14:57.430625  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:14:57.448740  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1122 00:14:57.466976  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:14:57.489136  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:14:57.510655  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1122 00:14:57.531352  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1122 00:14:57.551538  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:14:57.572743  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1122 00:14:57.593047  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /usr/share/ca-certificates/5169372.pem (1708 bytes)
	I1122 00:14:57.616537  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:14:57.636347  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem --> /usr/share/ca-certificates/516937.pem (1338 bytes)
	I1122 00:14:57.655714  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1122 00:14:57.671132  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1122 00:14:57.686013  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1122 00:14:57.702655  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1122 00:14:57.717580  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1122 00:14:57.733104  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1122 00:14:57.748086  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1122 00:14:57.762829  563925 ssh_runner.go:195] Run: openssl version
	I1122 00:14:57.770255  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5169372.pem && ln -fs /usr/share/ca-certificates/5169372.pem /etc/ssl/certs/5169372.pem"
	I1122 00:14:57.779598  563925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5169372.pem
	I1122 00:14:57.784055  563925 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:55 /usr/share/ca-certificates/5169372.pem
	I1122 00:14:57.784140  563925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5169372.pem
	I1122 00:14:57.827123  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5169372.pem /etc/ssl/certs/3ec20f2e.0"
	I1122 00:14:57.836065  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:14:57.845341  563925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:14:57.849594  563925 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:14:57.849679  563925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:14:57.893282  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1122 00:14:57.903127  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516937.pem && ln -fs /usr/share/ca-certificates/516937.pem /etc/ssl/certs/516937.pem"
	I1122 00:14:57.912201  563925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516937.pem
	I1122 00:14:57.916336  563925 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:55 /usr/share/ca-certificates/516937.pem
	I1122 00:14:57.916418  563925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516937.pem
	I1122 00:14:57.959761  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516937.pem /etc/ssl/certs/51391683.0"
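The <hash>.0 symlinks created above (3ec20f2e.0, b5213941.0, 51391683.0) are named after the OpenSSL subject-name hash of each certificate, which is how OpenSSL locates CA certificates under /etc/ssl/certs. The mapping can be confirmed on the node with, for example (sketch):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0                                           # symlink back to minikubeCA.pem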
	I1122 00:14:57.969369  563925 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:14:57.974254  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1122 00:14:58.017064  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1122 00:14:58.070486  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1122 00:14:58.116182  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1122 00:14:58.158146  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1122 00:14:58.220397  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1122 00:14:58.263034  563925 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1122 00:14:58.263156  563925 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-561110-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-561110 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1122 00:14:58.263186  563925 kube-vip.go:115] generating kube-vip config ...
	I1122 00:14:58.263244  563925 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1122 00:14:58.282844  563925 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1122 00:14:58.282918  563925 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
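The generated manifest is copied below to /etc/kubernetes/manifests/kube-vip.yaml, so the kubelet runs kube-vip as a static pod that advertises the control-plane VIP over ARP. Whether the VIP from this config (192.168.49.254 on eth0) is currently held by the node can be checked with (sketch; assumes crictl is available on the node, as it is in this image):

    ip addr show eth0 | grep 192.168.49.254
    sudo crictl ps --name kube-vip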
	I1122 00:14:58.282999  563925 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 00:14:58.293245  563925 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:14:58.293334  563925 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1122 00:14:58.306481  563925 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1122 00:14:58.327177  563925 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:14:58.341755  563925 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1122 00:14:58.358483  563925 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1122 00:14:58.362397  563925 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:14:58.372758  563925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:14:58.574763  563925 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:14:58.589366  563925 config.go:182] Loaded profile config "ha-561110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:14:58.589071  563925 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:14:58.595464  563925 out.go:179] * Verifying Kubernetes components...
	I1122 00:14:58.597975  563925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:14:58.780512  563925 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:14:58.804624  563925 kapi.go:59] client config for ha-561110: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/client.crt", KeyFile:"/home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/client.key", CAFile:"/home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1122 00:14:58.804704  563925 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1122 00:14:58.804940  563925 node_ready.go:35] waiting up to 6m0s for node "ha-561110-m02" to be "Ready" ...
	I1122 00:15:18.370415  563925 node_ready.go:49] node "ha-561110-m02" is "Ready"
	I1122 00:15:18.370443  563925 node_ready.go:38] duration metric: took 19.565489572s for node "ha-561110-m02" to be "Ready" ...
	I1122 00:15:18.370457  563925 api_server.go:52] waiting for apiserver process to appear ...
	I1122 00:15:18.370519  563925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:15:18.871467  563925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:15:19.371300  563925 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:15:19.387145  563925 api_server.go:72] duration metric: took 20.797721396s to wait for apiserver process to appear ...
	I1122 00:15:19.387224  563925 api_server.go:88] waiting for apiserver healthz status ...
	I1122 00:15:19.387265  563925 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1122 00:15:19.396105  563925 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1122 00:15:19.396183  563925 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1122 00:15:19.887636  563925 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1122 00:15:19.899172  563925 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1122 00:15:19.899202  563925 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1122 00:15:20.387390  563925 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1122 00:15:20.399975  563925 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1122 00:15:20.401338  563925 api_server.go:141] control plane version: v1.34.1
	I1122 00:15:20.401367  563925 api_server.go:131] duration metric: took 1.014115281s to wait for apiserver health ...
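The 500 responses above are the apiserver's verbose health output while post-start hooks such as rbac/bootstrap-roles were still finishing; once they completed, /healthz returned 200 and the wait ended. The same per-check breakdown can be fetched directly (sketch; -k skips TLS verification, and /healthz is readable without credentials on a default apiserver configuration):

    curl -k "https://192.168.49.2:8443/healthz?verbose"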
	I1122 00:15:20.401377  563925 system_pods.go:43] waiting for kube-system pods to appear ...
	I1122 00:15:20.428331  563925 system_pods.go:59] 25 kube-system pods found
	I1122 00:15:20.428372  563925 system_pods.go:61] "coredns-66bc5c9577-rrkkw" [97c7e1c9-e499-4131-957e-6da8bd29c994] Running
	I1122 00:15:20.428379  563925 system_pods.go:61] "coredns-66bc5c9577-vp8f5" [6d945620-203b-4e4e-b9e2-ef07e6b0f89b] Running
	I1122 00:15:20.428413  563925 system_pods.go:61] "etcd-ha-561110" [5a87193f-0871-4a4c-a409-4d52da31b88b] Running
	I1122 00:15:20.428428  563925 system_pods.go:61] "etcd-ha-561110-m02" [2c4dde3d-3a4c-4d47-b52c-980920facb09] Running
	I1122 00:15:20.428433  563925 system_pods.go:61] "etcd-ha-561110-m03" [d9d64b02-a6c9-48d1-9633-71cfae997fa8] Running
	I1122 00:15:20.428436  563925 system_pods.go:61] "kindnet-4tkd6" [63b063bf-1876-47e2-acb2-a5561b22b975] Running
	I1122 00:15:20.428440  563925 system_pods.go:61] "kindnet-7g65m" [edeca4a6-de24-4444-be9c-cdcbf744f52a] Running
	I1122 00:15:20.428448  563925 system_pods.go:61] "kindnet-dltvw" [ec75f262-ca6c-4766-bc81-60a4e51e94f0] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1122 00:15:20.428457  563925 system_pods.go:61] "kindnet-w4kh7" [61649d36-e515-4c70-831e-2a509e3b67f3] Running
	I1122 00:15:20.428464  563925 system_pods.go:61] "kube-apiserver-ha-561110" [e94b2c4e-8cc8-45e3-9b89-d1805b254c99] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1122 00:15:20.428469  563925 system_pods.go:61] "kube-apiserver-ha-561110-m02" [98ee0c6b-6094-4264-98e8-69d3f1bd0c04] Running
	I1122 00:15:20.428491  563925 system_pods.go:61] "kube-apiserver-ha-561110-m03" [5b0131a7-0af0-48ff-8889-e82b8a2a2e43] Running
	I1122 00:15:20.428503  563925 system_pods.go:61] "kube-controller-manager-ha-561110" [db7b105b-9fa2-43a8-a08d-837b9960db31] Running
	I1122 00:15:20.428508  563925 system_pods.go:61] "kube-controller-manager-ha-561110-m02" [2bb17b90-45c6-4c74-96a1-81f05c51a0cf] Running
	I1122 00:15:20.428511  563925 system_pods.go:61] "kube-controller-manager-ha-561110-m03" [a1fefba1-3967-4b58-b8e7-2bec4a7b896b] Running
	I1122 00:15:20.428516  563925 system_pods.go:61] "kube-proxy-2vctt" [f89e3d32-bca1-4b9a-8531-7eab74e6e0da] Running
	I1122 00:15:20.428527  563925 system_pods.go:61] "kube-proxy-b8wb5" [ac8e8b19-cd59-454e-ab83-b9d08cf4cea0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1122 00:15:20.428533  563925 system_pods.go:61] "kube-proxy-fh5cv" [318c6763-fea1-4564-86f6-18cfad691213] Running
	I1122 00:15:20.428542  563925 system_pods.go:61] "kube-proxy-v5ndg" [5e85dc4a-71dd-40c6-86f6-5c79b7f45194] Running
	I1122 00:15:20.428546  563925 system_pods.go:61] "kube-scheduler-ha-561110" [3267ceff-350f-471c-8e2b-9be8b8bdc471] Running
	I1122 00:15:20.428567  563925 system_pods.go:61] "kube-scheduler-ha-561110-m02" [75edb16c-cd99-46b4-bd49-e0646746877f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1122 00:15:20.428578  563925 system_pods.go:61] "kube-scheduler-ha-561110-m03" [6763f28e-1726-4a48-bac3-1a7e5f82595e] Running
	I1122 00:15:20.428582  563925 system_pods.go:61] "kube-vip-ha-561110-m02" [e4be1217-de52-4c2a-8cfb-a411559af009] Running
	I1122 00:15:20.428596  563925 system_pods.go:61] "kube-vip-ha-561110-m03" [5e7072f7-2a3d-4add-bc1d-e69a03dd28cb] Running
	I1122 00:15:20.428608  563925 system_pods.go:61] "storage-provisioner" [6bf95a26-263b-4088-904d-b344d4826342] Running
	I1122 00:15:20.428614  563925 system_pods.go:74] duration metric: took 27.23022ms to wait for pod list to return data ...
	I1122 00:15:20.428622  563925 default_sa.go:34] waiting for default service account to be created ...
	I1122 00:15:20.444498  563925 default_sa.go:45] found service account: "default"
	I1122 00:15:20.444536  563925 default_sa.go:55] duration metric: took 15.88117ms for default service account to be created ...
	I1122 00:15:20.444583  563925 system_pods.go:116] waiting for k8s-apps to be running ...
	I1122 00:15:20.468591  563925 system_pods.go:86] 25 kube-system pods found
	I1122 00:15:20.468633  563925 system_pods.go:89] "coredns-66bc5c9577-rrkkw" [97c7e1c9-e499-4131-957e-6da8bd29c994] Running
	I1122 00:15:20.468662  563925 system_pods.go:89] "coredns-66bc5c9577-vp8f5" [6d945620-203b-4e4e-b9e2-ef07e6b0f89b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:15:20.468674  563925 system_pods.go:89] "etcd-ha-561110" [5a87193f-0871-4a4c-a409-4d52da31b88b] Running
	I1122 00:15:20.468681  563925 system_pods.go:89] "etcd-ha-561110-m02" [2c4dde3d-3a4c-4d47-b52c-980920facb09] Running
	I1122 00:15:20.468703  563925 system_pods.go:89] "etcd-ha-561110-m03" [d9d64b02-a6c9-48d1-9633-71cfae997fa8] Running
	I1122 00:15:20.468713  563925 system_pods.go:89] "kindnet-4tkd6" [63b063bf-1876-47e2-acb2-a5561b22b975] Running
	I1122 00:15:20.468719  563925 system_pods.go:89] "kindnet-7g65m" [edeca4a6-de24-4444-be9c-cdcbf744f52a] Running
	I1122 00:15:20.468727  563925 system_pods.go:89] "kindnet-dltvw" [ec75f262-ca6c-4766-bc81-60a4e51e94f0] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1122 00:15:20.468736  563925 system_pods.go:89] "kindnet-w4kh7" [61649d36-e515-4c70-831e-2a509e3b67f3] Running
	I1122 00:15:20.468743  563925 system_pods.go:89] "kube-apiserver-ha-561110" [e94b2c4e-8cc8-45e3-9b89-d1805b254c99] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1122 00:15:20.468753  563925 system_pods.go:89] "kube-apiserver-ha-561110-m02" [98ee0c6b-6094-4264-98e8-69d3f1bd0c04] Running
	I1122 00:15:20.468758  563925 system_pods.go:89] "kube-apiserver-ha-561110-m03" [5b0131a7-0af0-48ff-8889-e82b8a2a2e43] Running
	I1122 00:15:20.468762  563925 system_pods.go:89] "kube-controller-manager-ha-561110" [db7b105b-9fa2-43a8-a08d-837b9960db31] Running
	I1122 00:15:20.468785  563925 system_pods.go:89] "kube-controller-manager-ha-561110-m02" [2bb17b90-45c6-4c74-96a1-81f05c51a0cf] Running
	I1122 00:15:20.468796  563925 system_pods.go:89] "kube-controller-manager-ha-561110-m03" [a1fefba1-3967-4b58-b8e7-2bec4a7b896b] Running
	I1122 00:15:20.468800  563925 system_pods.go:89] "kube-proxy-2vctt" [f89e3d32-bca1-4b9a-8531-7eab74e6e0da] Running
	I1122 00:15:20.468809  563925 system_pods.go:89] "kube-proxy-b8wb5" [ac8e8b19-cd59-454e-ab83-b9d08cf4cea0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1122 00:15:20.468818  563925 system_pods.go:89] "kube-proxy-fh5cv" [318c6763-fea1-4564-86f6-18cfad691213] Running
	I1122 00:15:20.468823  563925 system_pods.go:89] "kube-proxy-v5ndg" [5e85dc4a-71dd-40c6-86f6-5c79b7f45194] Running
	I1122 00:15:20.468827  563925 system_pods.go:89] "kube-scheduler-ha-561110" [3267ceff-350f-471c-8e2b-9be8b8bdc471] Running
	I1122 00:15:20.468833  563925 system_pods.go:89] "kube-scheduler-ha-561110-m02" [75edb16c-cd99-46b4-bd49-e0646746877f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1122 00:15:20.468841  563925 system_pods.go:89] "kube-scheduler-ha-561110-m03" [6763f28e-1726-4a48-bac3-1a7e5f82595e] Running
	I1122 00:15:20.468869  563925 system_pods.go:89] "kube-vip-ha-561110-m02" [e4be1217-de52-4c2a-8cfb-a411559af009] Running
	I1122 00:15:20.468881  563925 system_pods.go:89] "kube-vip-ha-561110-m03" [5e7072f7-2a3d-4add-bc1d-e69a03dd28cb] Running
	I1122 00:15:20.468887  563925 system_pods.go:89] "storage-provisioner" [6bf95a26-263b-4088-904d-b344d4826342] Running
	I1122 00:15:20.468911  563925 system_pods.go:126] duration metric: took 24.319558ms to wait for k8s-apps to be running ...
	I1122 00:15:20.468936  563925 system_svc.go:44] waiting for kubelet service to be running ....
	I1122 00:15:20.469011  563925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:15:20.486178  563925 system_svc.go:56] duration metric: took 17.232261ms WaitForService to wait for kubelet
	I1122 00:15:20.486213  563925 kubeadm.go:587] duration metric: took 21.896794227s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:15:20.486246  563925 node_conditions.go:102] verifying NodePressure condition ...
	I1122 00:15:20.505594  563925 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1122 00:15:20.505637  563925 node_conditions.go:123] node cpu capacity is 2
	I1122 00:15:20.505651  563925 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1122 00:15:20.505673  563925 node_conditions.go:123] node cpu capacity is 2
	I1122 00:15:20.505684  563925 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1122 00:15:20.505689  563925 node_conditions.go:123] node cpu capacity is 2
	I1122 00:15:20.505693  563925 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1122 00:15:20.505697  563925 node_conditions.go:123] node cpu capacity is 2
	I1122 00:15:20.505716  563925 node_conditions.go:105] duration metric: took 19.443078ms to run NodePressure ...
	I1122 00:15:20.505736  563925 start.go:242] waiting for startup goroutines ...
	I1122 00:15:20.505776  563925 start.go:256] writing updated cluster config ...
	I1122 00:15:20.509517  563925 out.go:203] 
	I1122 00:15:20.512839  563925 config.go:182] Loaded profile config "ha-561110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:15:20.513009  563925 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/config.json ...
	I1122 00:15:20.516821  563925 out.go:179] * Starting "ha-561110-m03" control-plane node in "ha-561110" cluster
	I1122 00:15:20.520742  563925 cache.go:134] Beginning downloading kic base image for docker with crio
	I1122 00:15:20.524203  563925 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:15:20.527654  563925 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:15:20.527732  563925 cache.go:65] Caching tarball of preloaded images
	I1122 00:15:20.527695  563925 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:15:20.528031  563925 preload.go:238] Found /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1122 00:15:20.528049  563925 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1122 00:15:20.528201  563925 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/config.json ...
	I1122 00:15:20.552866  563925 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:15:20.552887  563925 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:15:20.552899  563925 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:15:20.552922  563925 start.go:360] acquireMachinesLock for ha-561110-m03: {Name:mk8a19cfae84d78ad843d3f8169a3190cadb2d92 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:15:20.552971  563925 start.go:364] duration metric: took 34.805µs to acquireMachinesLock for "ha-561110-m03"
	I1122 00:15:20.552989  563925 start.go:96] Skipping create...Using existing machine configuration
	I1122 00:15:20.552994  563925 fix.go:54] fixHost starting: m03
	I1122 00:15:20.553255  563925 cli_runner.go:164] Run: docker container inspect ha-561110-m03 --format={{.State.Status}}
	I1122 00:15:20.581965  563925 fix.go:112] recreateIfNeeded on ha-561110-m03: state=Stopped err=<nil>
	W1122 00:15:20.581999  563925 fix.go:138] unexpected machine state, will restart: <nil>
	I1122 00:15:20.586013  563925 out.go:252] * Restarting existing docker container for "ha-561110-m03" ...
	I1122 00:15:20.586099  563925 cli_runner.go:164] Run: docker start ha-561110-m03
	I1122 00:15:20.954348  563925 cli_runner.go:164] Run: docker container inspect ha-561110-m03 --format={{.State.Status}}
	I1122 00:15:20.979345  563925 kic.go:430] container "ha-561110-m03" state is running.
	I1122 00:15:20.979708  563925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-561110-m03
	I1122 00:15:21.002371  563925 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/config.json ...
	I1122 00:15:21.002682  563925 machine.go:94] provisionDockerMachine start ...
	I1122 00:15:21.002758  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m03
	I1122 00:15:21.032872  563925 main.go:143] libmachine: Using SSH client type: native
	I1122 00:15:21.033195  563925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33545 <nil> <nil>}
	I1122 00:15:21.033211  563925 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:15:21.033881  563925 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1122 00:15:24.293634  563925 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-561110-m03
	
	I1122 00:15:24.293664  563925 ubuntu.go:182] provisioning hostname "ha-561110-m03"
	I1122 00:15:24.293763  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m03
	I1122 00:15:24.324599  563925 main.go:143] libmachine: Using SSH client type: native
	I1122 00:15:24.324926  563925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33545 <nil> <nil>}
	I1122 00:15:24.324939  563925 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-561110-m03 && echo "ha-561110-m03" | sudo tee /etc/hostname
	I1122 00:15:24.595129  563925 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-561110-m03
	
	I1122 00:15:24.595249  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m03
	I1122 00:15:24.620733  563925 main.go:143] libmachine: Using SSH client type: native
	I1122 00:15:24.621049  563925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33545 <nil> <nil>}
	I1122 00:15:24.621676  563925 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-561110-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-561110-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-561110-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:15:24.856356  563925 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1122 00:15:24.856384  563925 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-513600/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-513600/.minikube}
	I1122 00:15:24.856400  563925 ubuntu.go:190] setting up certificates
	I1122 00:15:24.856434  563925 provision.go:84] configureAuth start
	I1122 00:15:24.856521  563925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-561110-m03
	I1122 00:15:24.885855  563925 provision.go:143] copyHostCerts
	I1122 00:15:24.885898  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem
	I1122 00:15:24.885930  563925 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem, removing ...
	I1122 00:15:24.885941  563925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem
	I1122 00:15:24.886031  563925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem (1078 bytes)
	I1122 00:15:24.886116  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem
	I1122 00:15:24.886139  563925 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem, removing ...
	I1122 00:15:24.886147  563925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem
	I1122 00:15:24.886175  563925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem (1123 bytes)
	I1122 00:15:24.886221  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem
	I1122 00:15:24.886242  563925 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem, removing ...
	I1122 00:15:24.886246  563925 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem
	I1122 00:15:24.886271  563925 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem (1675 bytes)
	I1122 00:15:24.886322  563925 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem org=jenkins.ha-561110-m03 san=[127.0.0.1 192.168.49.4 ha-561110-m03 localhost minikube]
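The machine server certificate above is signed with minikube's own CA (ca.pem / ca-key.pem); purely to illustrate the requested SAN set, an equivalent CSR could be produced with OpenSSL 1.1.1+ (sketch only, not what minikube runs):

    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.ha-561110-m03" \
      -addext "subjectAltName=DNS:ha-561110-m03,DNS:localhost,DNS:minikube,IP:127.0.0.1,IP:192.168.49.4"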
	I1122 00:15:25.343405  563925 provision.go:177] copyRemoteCerts
	I1122 00:15:25.343499  563925 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:15:25.343569  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m03
	I1122 00:15:25.363935  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33545 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110-m03/id_rsa Username:docker}
	I1122 00:15:25.550286  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1122 00:15:25.550350  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1122 00:15:25.575299  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1122 00:15:25.575374  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:15:25.598237  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1122 00:15:25.598338  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1122 00:15:25.628049  563925 provision.go:87] duration metric: took 771.594834ms to configureAuth
	I1122 00:15:25.628077  563925 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:15:25.628358  563925 config.go:182] Loaded profile config "ha-561110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:15:25.628508  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m03
	I1122 00:15:25.662079  563925 main.go:143] libmachine: Using SSH client type: native
	I1122 00:15:25.662398  563925 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33545 <nil> <nil>}
	I1122 00:15:25.662419  563925 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1122 00:15:26.350066  563925 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1122 00:15:26.350092  563925 machine.go:97] duration metric: took 5.34739065s to provisionDockerMachine
	I1122 00:15:26.350164  563925 start.go:293] postStartSetup for "ha-561110-m03" (driver="docker")
	I1122 00:15:26.350184  563925 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:15:26.350274  563925 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:15:26.350334  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m03
	I1122 00:15:26.375980  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33545 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110-m03/id_rsa Username:docker}
	I1122 00:15:26.492303  563925 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:15:26.496241  563925 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:15:26.496272  563925 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:15:26.496284  563925 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/addons for local assets ...
	I1122 00:15:26.496339  563925 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/files for local assets ...
	I1122 00:15:26.496422  563925 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem -> 5169372.pem in /etc/ssl/certs
	I1122 00:15:26.496433  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem -> /etc/ssl/certs/5169372.pem
	I1122 00:15:26.496535  563925 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:15:26.505321  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /etc/ssl/certs/5169372.pem (1708 bytes)
	I1122 00:15:26.526339  563925 start.go:296] duration metric: took 176.150409ms for postStartSetup
	I1122 00:15:26.526443  563925 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:15:26.526504  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m03
	I1122 00:15:26.550085  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33545 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110-m03/id_rsa Username:docker}
	I1122 00:15:26.663353  563925 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:15:26.670831  563925 fix.go:56] duration metric: took 6.117814975s for fixHost
	I1122 00:15:26.670857  563925 start.go:83] releasing machines lock for "ha-561110-m03", held for 6.117877799s
	I1122 00:15:26.670925  563925 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-561110-m03
	I1122 00:15:26.706528  563925 out.go:179] * Found network options:
	I1122 00:15:26.709469  563925 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1122 00:15:26.712333  563925 proxy.go:120] fail to check proxy env: Error ip not in block
	W1122 00:15:26.712371  563925 proxy.go:120] fail to check proxy env: Error ip not in block
	W1122 00:15:26.712395  563925 proxy.go:120] fail to check proxy env: Error ip not in block
	W1122 00:15:26.712406  563925 proxy.go:120] fail to check proxy env: Error ip not in block
	I1122 00:15:26.712494  563925 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1122 00:15:26.712541  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m03
	I1122 00:15:26.712807  563925 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:15:26.712873  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m03
	I1122 00:15:26.749585  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33545 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110-m03/id_rsa Username:docker}
	I1122 00:15:26.751996  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33545 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110-m03/id_rsa Username:docker}
	I1122 00:15:27.082598  563925 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:15:27.101543  563925 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:15:27.101616  563925 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:15:27.126235  563925 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1122 00:15:27.126257  563925 start.go:496] detecting cgroup driver to use...
	I1122 00:15:27.126287  563925 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1122 00:15:27.126334  563925 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1122 00:15:27.165923  563925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1122 00:15:27.239673  563925 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:15:27.239811  563925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:15:27.293000  563925 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:15:27.338853  563925 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:15:27.741533  563925 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:15:28.092677  563925 docker.go:234] disabling docker service ...
	I1122 00:15:28.092771  563925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:15:28.168796  563925 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:15:28.226242  563925 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:15:28.659941  563925 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:15:29.058606  563925 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:15:29.101920  563925 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:15:29.136744  563925 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1122 00:15:29.136856  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:15:29.162030  563925 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1122 00:15:29.162149  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:15:29.183947  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:15:29.221891  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:15:29.244672  563925 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:15:29.275560  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:15:29.306222  563925 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:15:29.332094  563925 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:15:29.350775  563925 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:15:29.370006  563925 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:15:29.391362  563925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:15:29.706214  563925 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1122 00:17:00.097219  563925 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.390962529s)
	I1122 00:17:00.097249  563925 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1122 00:17:00.097319  563925 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1122 00:17:00.113544  563925 start.go:564] Will wait 60s for crictl version
	I1122 00:17:00.113649  563925 ssh_runner.go:195] Run: which crictl
	I1122 00:17:00.136784  563925 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:17:00.321902  563925 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1122 00:17:00.322038  563925 ssh_runner.go:195] Run: crio --version
	I1122 00:17:00.437751  563925 ssh_runner.go:195] Run: crio --version
	I1122 00:17:00.498700  563925 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1122 00:17:00.502322  563925 out.go:179]   - env NO_PROXY=192.168.49.2
	I1122 00:17:00.505365  563925 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1122 00:17:00.508493  563925 cli_runner.go:164] Run: docker network inspect ha-561110 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:17:00.538039  563925 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1122 00:17:00.545403  563925 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:17:00.558621  563925 mustload.go:66] Loading cluster: ha-561110
	I1122 00:17:00.558938  563925 config.go:182] Loaded profile config "ha-561110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:17:00.559221  563925 cli_runner.go:164] Run: docker container inspect ha-561110 --format={{.State.Status}}
	I1122 00:17:00.586783  563925 host.go:66] Checking if "ha-561110" exists ...
	I1122 00:17:00.587143  563925 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110 for IP: 192.168.49.4
	I1122 00:17:00.587159  563925 certs.go:195] generating shared ca certs ...
	I1122 00:17:00.587181  563925 certs.go:227] acquiring lock for ca certs: {Name:mkaf4c79493334cb2058b349c36be7473837f9b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:17:00.587353  563925 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key
	I1122 00:17:00.587400  563925 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key
	I1122 00:17:00.587412  563925 certs.go:257] generating profile certs ...
	I1122 00:17:00.587496  563925 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/client.key
	I1122 00:17:00.587573  563925 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.key.be48eb15
	I1122 00:17:00.587622  563925 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.key
	I1122 00:17:00.587635  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1122 00:17:00.587651  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1122 00:17:00.587667  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1122 00:17:00.587723  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1122 00:17:00.587739  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1122 00:17:00.587752  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1122 00:17:00.587768  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1122 00:17:00.587778  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1122 00:17:00.587836  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem (1338 bytes)
	W1122 00:17:00.587877  563925 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937_empty.pem, impossibly tiny 0 bytes
	I1122 00:17:00.587891  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:17:00.587929  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:17:00.587961  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:17:00.587990  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem (1675 bytes)
	I1122 00:17:00.588101  563925 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem (1708 bytes)
	I1122 00:17:00.588199  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem -> /usr/share/ca-certificates/516937.pem
	I1122 00:17:00.588226  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem -> /usr/share/ca-certificates/5169372.pem
	I1122 00:17:00.588241  563925 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:17:00.588312  563925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110
	I1122 00:17:00.613873  563925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33535 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110/id_rsa Username:docker}
	I1122 00:17:00.714215  563925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1122 00:17:00.718718  563925 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1122 00:17:00.729019  563925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1122 00:17:00.733330  563925 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1122 00:17:00.743477  563925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1122 00:17:00.747658  563925 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1122 00:17:00.758201  563925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1122 00:17:00.763435  563925 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1122 00:17:00.773425  563925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1122 00:17:00.777456  563925 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1122 00:17:00.787246  563925 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1122 00:17:00.791598  563925 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1122 00:17:00.801660  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:17:00.826055  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1122 00:17:00.848933  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:17:00.888604  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:17:00.921496  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1122 00:17:00.951086  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1122 00:17:00.975145  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:17:00.999138  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1122 00:17:01.024534  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem --> /usr/share/ca-certificates/516937.pem (1338 bytes)
	I1122 00:17:01.046560  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /usr/share/ca-certificates/5169372.pem (1708 bytes)
	I1122 00:17:01.072877  563925 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:17:01.103089  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1122 00:17:01.119601  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1122 00:17:01.136419  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1122 00:17:01.153380  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1122 00:17:01.171240  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1122 00:17:01.202584  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1122 00:17:01.223852  563925 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1122 00:17:01.247292  563925 ssh_runner.go:195] Run: openssl version
	I1122 00:17:01.259516  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:17:01.280780  563925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:17:01.289039  563925 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:17:01.289158  563925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:17:01.373640  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1122 00:17:01.395461  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516937.pem && ln -fs /usr/share/ca-certificates/516937.pem /etc/ssl/certs/516937.pem"
	I1122 00:17:01.420524  563925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516937.pem
	I1122 00:17:01.426623  563925 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:55 /usr/share/ca-certificates/516937.pem
	I1122 00:17:01.426698  563925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516937.pem
	I1122 00:17:01.478449  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516937.pem /etc/ssl/certs/51391683.0"
	I1122 00:17:01.490493  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5169372.pem && ln -fs /usr/share/ca-certificates/5169372.pem /etc/ssl/certs/5169372.pem"
	I1122 00:17:01.502084  563925 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5169372.pem
	I1122 00:17:01.507855  563925 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:55 /usr/share/ca-certificates/5169372.pem
	I1122 00:17:01.507956  563925 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5169372.pem
	I1122 00:17:01.587957  563925 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5169372.pem /etc/ssl/certs/3ec20f2e.0"
	I1122 00:17:01.599719  563925 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:17:01.605126  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1122 00:17:01.660029  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1122 00:17:01.712345  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1122 00:17:01.786467  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1122 00:17:01.862166  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1122 00:17:01.946187  563925 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1122 00:17:02.010384  563925 kubeadm.go:935] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1122 00:17:02.010523  563925 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-561110-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-561110 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1122 00:17:02.010557  563925 kube-vip.go:115] generating kube-vip config ...
	I1122 00:17:02.010619  563925 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1122 00:17:02.037246  563925 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1122 00:17:02.037316  563925 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1122 00:17:02.037405  563925 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 00:17:02.052472  563925 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:17:02.052567  563925 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1122 00:17:02.073857  563925 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1122 00:17:02.112139  563925 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:17:02.133854  563925 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1122 00:17:02.152649  563925 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1122 00:17:02.158389  563925 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:17:02.184228  563925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:17:02.493772  563925 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:17:02.514312  563925 start.go:236] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:17:02.514696  563925 config.go:182] Loaded profile config "ha-561110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:17:02.518824  563925 out.go:179] * Verifying Kubernetes components...
	I1122 00:17:02.521919  563925 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:17:02.746981  563925 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:17:02.765468  563925 kapi.go:59] client config for ha-561110: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/client.crt", KeyFile:"/home/jenkins/minikube-integration/21934-513600/.minikube/profiles/ha-561110/client.key", CAFile:"/home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1122 00:17:02.765589  563925 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1122 00:17:02.765898  563925 node_ready.go:35] waiting up to 6m0s for node "ha-561110-m03" to be "Ready" ...
	W1122 00:17:04.770183  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:06.771513  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:09.269611  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:11.270683  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:13.275612  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:15.769660  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:17.769933  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:20.269315  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:22.270943  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:24.769260  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:26.770369  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:29.269015  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:31.269858  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:33.269945  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:35.769971  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:38.269922  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:40.270335  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:42.271149  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:44.770140  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:47.269690  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:49.270654  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:51.770465  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:54.269768  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:56.769254  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:17:58.769625  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:00.770202  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:02.773270  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:05.270130  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:07.271583  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:09.769397  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:11.770012  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:13.770106  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:16.270008  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:18.771373  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:21.270047  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:23.768948  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:25.770213  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:28.269635  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:30.770096  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:32.771794  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:35.270059  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:37.769842  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:40.269289  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:42.273345  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:44.275125  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:46.776656  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:49.270280  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:51.770076  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:54.269588  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:56.270135  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:18:58.768991  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:00.771422  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:03.269840  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:05.270420  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:07.770020  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:10.268980  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:12.269695  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:14.769271  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:16.769509  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:19.270240  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:21.769249  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:23.770580  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:26.269982  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:28.770054  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:31.269163  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:33.269886  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:35.270677  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:37.769622  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:39.769703  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:42.270956  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:44.768762  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:46.769989  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:49.269515  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:51.270122  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:53.769467  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:55.770293  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:19:58.269947  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:00.322810  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:02.769554  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:04.770551  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:07.269784  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:09.769344  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:11.769990  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:14.269132  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:16.269765  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:18.770174  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:21.269837  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:23.270065  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:25.770172  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:28.269279  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:30.270734  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:32.769392  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:34.769668  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:36.770010  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:38.770203  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:40.770721  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:43.270389  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:45.276123  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:47.770112  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:50.269310  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:52.269861  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:54.270570  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:56.769591  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:20:58.770126  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:01.270099  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:03.769793  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:05.771503  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:08.269537  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:10.770347  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:13.269687  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:15.270464  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:17.271724  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:19.769950  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:22.269581  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:24.269903  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:26.269977  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:28.769453  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:30.770323  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:33.270153  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:35.769486  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:37.770126  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:39.770389  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:42.273464  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:44.769688  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:46.770370  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:49.269335  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:51.270430  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:53.769776  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:56.269697  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:21:58.270251  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:00.292924  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:02.779828  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:05.270290  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:07.270475  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:09.769072  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:11.769917  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:13.770097  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:16.269780  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:18.269850  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:20.276178  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:22.770032  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:25.270326  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:27.769736  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:30.270331  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:32.768987  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:35.269587  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:37.770642  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:40.269226  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:42.281918  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:44.770302  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:47.269651  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:49.270011  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:51.770305  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:54.269848  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:56.269962  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:22:58.770073  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	W1122 00:23:00.770445  563925 node_ready.go:57] node "ha-561110-m03" has "Ready":"Unknown" status (will retry)
	I1122 00:23:02.766152  563925 node_ready.go:38] duration metric: took 6m0.000206678s for node "ha-561110-m03" to be "Ready" ...
	I1122 00:23:02.769486  563925 out.go:203] 
	W1122 00:23:02.772416  563925 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1122 00:23:02.772436  563925 out.go:285] * 
	W1122 00:23:02.774635  563925 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1122 00:23:02.776836  563925 out.go:203] 
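	The failure above is the 6m0s wait for node "ha-561110-m03" to report Ready; every poll in the node_ready.go loop came back "Ready":"Unknown" until the context deadline expired. A minimal manual re-check of the same condition, assuming kubectl is already pointed at the ha-561110 cluster (for example via "minikube -p ha-561110 kubectl --") and reusing the node name from this log:
	
	    NODE=ha-561110-m03
	    # Poll the Ready condition a few times, mirroring the wait loop above.
	    for i in $(seq 1 10); do
	      status=$(kubectl get node "$NODE" -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}')
	      echo "attempt $i: Ready=$status"
	      [ "$status" = "True" ] && break
	      sleep 5
	    done
	    # A node stuck at Unknown means its kubelet stopped posting status; the last
	    # heartbeat time and any NotReady taints are visible in the describe output.
	    kubectl describe node "$NODE"
	
	This is a diagnostic sketch only: the profile and node names are taken from the log, and the kubectl context is an assumption about the local environment.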
	
	
	==> CRI-O <==
	Nov 22 00:15:52 ha-561110 crio[666]: time="2025-11-22T00:15:52.39043996Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=6b55a33c-982b-407b-a39e-f5c092d837ad name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:15:52 ha-561110 crio[666]: time="2025-11-22T00:15:52.391455898Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=aed84f71-7deb-4060-a2b1-3504a94ddccd name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:15:52 ha-561110 crio[666]: time="2025-11-22T00:15:52.391592756Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:15:52 ha-561110 crio[666]: time="2025-11-22T00:15:52.398141795Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:15:52 ha-561110 crio[666]: time="2025-11-22T00:15:52.398456674Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/5080ecedb1aca210f92642c0da614341ac5baee6bb123e6d3efa15080462423f/merged/etc/passwd: no such file or directory"
	Nov 22 00:15:52 ha-561110 crio[666]: time="2025-11-22T00:15:52.398549644Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/5080ecedb1aca210f92642c0da614341ac5baee6bb123e6d3efa15080462423f/merged/etc/group: no such file or directory"
	Nov 22 00:15:52 ha-561110 crio[666]: time="2025-11-22T00:15:52.398849032Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:15:52 ha-561110 crio[666]: time="2025-11-22T00:15:52.429646993Z" level=info msg="Created container 135f8581d288b240b9c444b0861bec261a02882a56b15c99e1bb476a861d296a: kube-system/storage-provisioner/storage-provisioner" id=aed84f71-7deb-4060-a2b1-3504a94ddccd name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:15:52 ha-561110 crio[666]: time="2025-11-22T00:15:52.430745119Z" level=info msg="Starting container: 135f8581d288b240b9c444b0861bec261a02882a56b15c99e1bb476a861d296a" id=781c7a19-539c-4417-a691-8f4e096b71ed name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:15:52 ha-561110 crio[666]: time="2025-11-22T00:15:52.435089701Z" level=info msg="Started container" PID=1391 containerID=135f8581d288b240b9c444b0861bec261a02882a56b15c99e1bb476a861d296a description=kube-system/storage-provisioner/storage-provisioner id=781c7a19-539c-4417-a691-8f4e096b71ed name=/runtime.v1.RuntimeService/StartContainer sandboxID=de4629de69837fe0447ae13245102ae0d04524a3858dcce8a9d5b8e10bb91eaf
	Nov 22 00:16:01 ha-561110 crio[666]: time="2025-11-22T00:16:01.470325793Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:16:01 ha-561110 crio[666]: time="2025-11-22T00:16:01.474774281Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:16:01 ha-561110 crio[666]: time="2025-11-22T00:16:01.474811154Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 22 00:16:01 ha-561110 crio[666]: time="2025-11-22T00:16:01.474835695Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:16:01 ha-561110 crio[666]: time="2025-11-22T00:16:01.478898906Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:16:01 ha-561110 crio[666]: time="2025-11-22T00:16:01.478939659Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 22 00:16:01 ha-561110 crio[666]: time="2025-11-22T00:16:01.478962272Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:16:01 ha-561110 crio[666]: time="2025-11-22T00:16:01.482066282Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:16:01 ha-561110 crio[666]: time="2025-11-22T00:16:01.482101062Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 22 00:16:01 ha-561110 crio[666]: time="2025-11-22T00:16:01.482122772Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:16:01 ha-561110 crio[666]: time="2025-11-22T00:16:01.485482939Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:16:01 ha-561110 crio[666]: time="2025-11-22T00:16:01.485521674Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 22 00:16:01 ha-561110 crio[666]: time="2025-11-22T00:16:01.485545829Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:16:01 ha-561110 crio[666]: time="2025-11-22T00:16:01.488891801Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:16:01 ha-561110 crio[666]: time="2025-11-22T00:16:01.48892796Z" level=info msg="Updated default CNI network name to kindnet"
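	The CRI-O entries above show the runtime reacting to kindnet writing and renaming its CNI config under /etc/cni/net.d. A hedged way to confirm what ended up on the node, assuming the profile and node names from this report and that the minikube binary used by the test is on PATH:
	
	    minikube -p ha-561110 ssh -n ha-561110 "ls -l /etc/cni/net.d/ && sudo cat /etc/cni/net.d/10-kindnet.conflist"
	
	The flag spelling is an assumption about this minikube version; any shell on the node that can read /etc/cni/net.d serves the same purpose.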
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	135f8581d288b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   7 minutes ago       Running             storage-provisioner       2                   de4629de69837       storage-provisioner                 kube-system
	fe1c6226bf4c6       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   1                   b641fd83b9816       coredns-66bc5c9577-vp8f5            kube-system
	69ffa71725510       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   1                   1104258d7fdef       coredns-66bc5c9577-rrkkw            kube-system
	60513ca704c00       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   7 minutes ago       Exited              storage-provisioner       1                   de4629de69837       storage-provisioner                 kube-system
	d9e4613f17ffd       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   7 minutes ago       Running             kube-proxy                1                   1ff3e662bdd09       kube-proxy-fh5cv                    kube-system
	a2d8ce4bb1edd       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   7 minutes ago       Running             busybox                   1                   b8e440f614e56       busybox-7b57f96db7-fbtrb            default
	5a2fb45570b8d       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   7 minutes ago       Running             kindnet-cni               1                   55f44270c0111       kindnet-7g65m                       kube-system
	555f050993ba2       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   8 minutes ago       Running             kube-controller-manager   2                   10dbad5a4508a       kube-controller-manager-ha-561110   kube-system
	4cbb3fde391bd       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   8 minutes ago       Running             kube-apiserver            1                   38691a4dbf6ea       kube-apiserver-ha-561110            kube-system
	4360f5517fd5e       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   8 minutes ago       Running             kube-scheduler            1                   e0baba9cafe90       kube-scheduler-ha-561110            kube-system
	a395e7473ffe2       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   8 minutes ago       Running             etcd                      1                   193446051a803       etcd-ha-561110                      kube-system
	9fdf72902e6e0       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   8 minutes ago       Running             kube-vip                  0                   884d14e2e6045       kube-vip-ha-561110                  kube-system
	1c929db60119a       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   8 minutes ago       Exited              kube-controller-manager   1                   10dbad5a4508a       kube-controller-manager-ha-561110   kube-system
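	The table above is the node's CRI view of its containers. On the node itself, the equivalent listing (assuming crictl is configured against the CRI-O socket set earlier in this log, runtime-endpoint: unix:///var/run/crio/crio.sock) would be roughly:
	
	    sudo crictl ps -a -o table
	
	Read against the table, the two Exited rows (storage-provisioner attempt 1 and kube-controller-manager attempt 1) are earlier attempts superseded by the Running attempt-2 containers, not ongoing failures.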
	
	
	==> coredns [69ffa7172551035e0586a2f61f518f9846bd0b87abc14ba1505f02248c5a9a02] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39796 - 60732 "HINFO IN 576766510875163090.3461274759123809982. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.004198928s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [fe1c6226bf4c6a8f0d43125ecd01e36e538a750fd9dd5c3edb73d4ffd5a90aff] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58159 - 30701 "HINFO IN 6742751567940684104.616832762995402637. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.025967847s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               ha-561110
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-561110
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=ha-561110
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T00_09_04_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:09:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-561110
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:23:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:21:46 +0000   Sat, 22 Nov 2025 00:08:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:21:46 +0000   Sat, 22 Nov 2025 00:08:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:21:46 +0000   Sat, 22 Nov 2025 00:08:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 22 Nov 2025 00:21:46 +0000   Sat, 22 Nov 2025 00:15:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-561110
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c92eea4d3c03c0156e01fcf8691e3907
	  System UUID:                77a39681-2950-4264-8660-77e1aeddeb83
	  Boot ID:                    72ac7385-472f-47d1-a23e-bd80468d6e09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-fbtrb             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-66bc5c9577-rrkkw             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 coredns-66bc5c9577-vp8f5             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 etcd-ha-561110                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         14m
	  kube-system                 kindnet-7g65m                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-ha-561110             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-561110    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-fh5cv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-561110             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-561110                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m51s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m53s                  kube-proxy       
	  Normal   Starting                 14m                    kube-proxy       
	  Normal   NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-561110 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14m (x8 over 14m)      kubelet          Node ha-561110 status is now: NodeHasSufficientPID
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 14m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-561110 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 14m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     14m                    kubelet          Node ha-561110 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  14m                    kubelet          Node ha-561110 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m                    kubelet          Node ha-561110 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           14m                    node-controller  Node ha-561110 event: Registered Node ha-561110 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-561110 event: Registered Node ha-561110 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-561110 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-561110 event: Registered Node ha-561110 in Controller
	  Normal   RegisteredNode           8m57s                  node-controller  Node ha-561110 event: Registered Node ha-561110 in Controller
	  Warning  CgroupV1                 8m29s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8m28s (x8 over 8m28s)  kubelet          Node ha-561110 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m28s (x8 over 8m28s)  kubelet          Node ha-561110 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m28s (x8 over 8m28s)  kubelet          Node ha-561110 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m54s                  node-controller  Node ha-561110 event: Registered Node ha-561110 in Controller
	  Normal   RegisteredNode           7m46s                  node-controller  Node ha-561110 event: Registered Node ha-561110 in Controller
	
	
	Name:               ha-561110-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-561110-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=ha-561110
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_22T00_09_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:09:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-561110-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:23:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:23:08 +0000   Sat, 22 Nov 2025 00:09:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:23:08 +0000   Sat, 22 Nov 2025 00:09:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:23:08 +0000   Sat, 22 Nov 2025 00:09:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 22 Nov 2025 00:23:08 +0000   Sat, 22 Nov 2025 00:10:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-561110-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c92eea4d3c03c0156e01fcf8691e3907
	  System UUID:                a2162c95-cc29-4cd8-8a91-589e6eb1ab6b
	  Boot ID:                    72ac7385-472f-47d1-a23e-bd80468d6e09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-dx9nw                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-561110-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-dltvw                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-561110-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-561110-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-b8wb5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-561110-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-561110-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m42s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   RegisteredNode           13m                    node-controller  Node ha-561110-m02 event: Registered Node ha-561110-m02 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-561110-m02 event: Registered Node ha-561110-m02 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-561110-m02 event: Registered Node ha-561110-m02 in Controller
	  Normal   NodeHasSufficientPID     9m30s (x8 over 9m30s)  kubelet          Node ha-561110-m02 status is now: NodeHasSufficientPID
	  Normal   Starting                 9m30s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m30s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9m30s (x8 over 9m30s)  kubelet          Node ha-561110-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m30s (x8 over 9m30s)  kubelet          Node ha-561110-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           8m57s                  node-controller  Node ha-561110-m02 event: Registered Node ha-561110-m02 in Controller
	  Normal   Starting                 8m25s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m25s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8m24s (x8 over 8m25s)  kubelet          Node ha-561110-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m24s (x8 over 8m25s)  kubelet          Node ha-561110-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m24s (x8 over 8m25s)  kubelet          Node ha-561110-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m54s                  node-controller  Node ha-561110-m02 event: Registered Node ha-561110-m02 in Controller
	  Normal   RegisteredNode           7m46s                  node-controller  Node ha-561110-m02 event: Registered Node ha-561110-m02 in Controller
	
	
	Name:               ha-561110-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-561110-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=ha-561110
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_22T00_12_27_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:12:26 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-561110-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:14:09 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sat, 22 Nov 2025 00:13:09 +0000   Sat, 22 Nov 2025 00:16:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sat, 22 Nov 2025 00:13:09 +0000   Sat, 22 Nov 2025 00:16:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sat, 22 Nov 2025 00:13:09 +0000   Sat, 22 Nov 2025 00:16:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sat, 22 Nov 2025 00:13:09 +0000   Sat, 22 Nov 2025 00:16:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-561110-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c92eea4d3c03c0156e01fcf8691e3907
	  System UUID:                00d86356-c884-4dfd-a214-95f51a02c157
	  Boot ID:                    72ac7385-472f-47d1-a23e-bd80468d6e09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-4tkd6       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-proxy-2vctt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x3 over 10m)  kubelet          Node ha-561110-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x3 over 10m)  kubelet          Node ha-561110-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x3 over 10m)  kubelet          Node ha-561110-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node ha-561110-m04 event: Registered Node ha-561110-m04 in Controller
	  Normal  RegisteredNode           10m                node-controller  Node ha-561110-m04 event: Registered Node ha-561110-m04 in Controller
	  Normal  RegisteredNode           10m                node-controller  Node ha-561110-m04 event: Registered Node ha-561110-m04 in Controller
	  Normal  NodeReady                10m                kubelet          Node ha-561110-m04 status is now: NodeReady
	  Normal  RegisteredNode           8m57s              node-controller  Node ha-561110-m04 event: Registered Node ha-561110-m04 in Controller
	  Normal  RegisteredNode           7m54s              node-controller  Node ha-561110-m04 event: Registered Node ha-561110-m04 in Controller
	  Normal  RegisteredNode           7m46s              node-controller  Node ha-561110-m04 event: Registered Node ha-561110-m04 in Controller
	  Normal  NodeNotReady             7m4s               node-controller  Node ha-561110-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[Nov21 23:16] overlayfs: idmapped layers are currently not supported
	[Nov21 23:17] overlayfs: idmapped layers are currently not supported
	[ +10.681159] overlayfs: idmapped layers are currently not supported
	[Nov21 23:19] overlayfs: idmapped layers are currently not supported
	[ +15.192296] overlayfs: idmapped layers are currently not supported
	[Nov21 23:20] overlayfs: idmapped layers are currently not supported
	[Nov21 23:21] overlayfs: idmapped layers are currently not supported
	[Nov21 23:22] overlayfs: idmapped layers are currently not supported
	[ +12.884842] overlayfs: idmapped layers are currently not supported
	[Nov21 23:23] overlayfs: idmapped layers are currently not supported
	[ +12.022080] overlayfs: idmapped layers are currently not supported
	[Nov21 23:25] overlayfs: idmapped layers are currently not supported
	[ +24.447615] overlayfs: idmapped layers are currently not supported
	[Nov21 23:46] kauditd_printk_skb: 8 callbacks suppressed
	[Nov21 23:48] overlayfs: idmapped layers are currently not supported
	[Nov21 23:54] overlayfs: idmapped layers are currently not supported
	[Nov21 23:55] overlayfs: idmapped layers are currently not supported
	[Nov22 00:08] overlayfs: idmapped layers are currently not supported
	[Nov22 00:09] overlayfs: idmapped layers are currently not supported
	[Nov22 00:10] overlayfs: idmapped layers are currently not supported
	[Nov22 00:12] overlayfs: idmapped layers are currently not supported
	[Nov22 00:13] overlayfs: idmapped layers are currently not supported
	[Nov22 00:14] overlayfs: idmapped layers are currently not supported
	[  +3.904643] overlayfs: idmapped layers are currently not supported
	[Nov22 00:15] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [a395e7473ffe2b7999ae75a70e19b4f153d459c8ccae48aeeb71b5b6248cc1f2] <==
	{"level":"warn","ts":"2025-11-22T00:22:53.988161Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"700ebc6e9635b48f","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-22T00:22:54.402648Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"700ebc6e9635b48f","rtt":"51.609423ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-22T00:22:54.402639Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"700ebc6e9635b48f","rtt":"61.899654ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-22T00:22:57.989780Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"700ebc6e9635b48f","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-22T00:22:57.989860Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"700ebc6e9635b48f","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-22T00:22:59.403601Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"700ebc6e9635b48f","rtt":"51.609423ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-22T00:22:59.403589Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"700ebc6e9635b48f","rtt":"61.899654ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-22T00:23:01.991023Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"700ebc6e9635b48f","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-22T00:23:01.991083Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"700ebc6e9635b48f","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-22T00:23:04.403878Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"700ebc6e9635b48f","rtt":"61.899654ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-22T00:23:04.403933Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"700ebc6e9635b48f","rtt":"51.609423ms","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-22T00:23:05.992468Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"700ebc6e9635b48f","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-22T00:23:05.992552Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"700ebc6e9635b48f","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-22T00:23:06.740226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:36170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:23:06.795797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:36198","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-22T00:23:06.818468Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(12355062781122549397 12593026477526642892)"}
	{"level":"info","ts":"2025-11-22T00:23:06.820790Z","caller":"membership/cluster.go:460","msg":"removed member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"700ebc6e9635b48f","removed-remote-peer-urls":["https://192.168.49.4:2380"],"removed-remote-peer-is-learner":false}
	{"level":"info","ts":"2025-11-22T00:23:06.820867Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"700ebc6e9635b48f"}
	{"level":"info","ts":"2025-11-22T00:23:06.820913Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"700ebc6e9635b48f"}
	{"level":"info","ts":"2025-11-22T00:23:06.820978Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"700ebc6e9635b48f"}
	{"level":"info","ts":"2025-11-22T00:23:06.821052Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"700ebc6e9635b48f"}
	{"level":"info","ts":"2025-11-22T00:23:06.821092Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"700ebc6e9635b48f"}
	{"level":"info","ts":"2025-11-22T00:23:06.821172Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"700ebc6e9635b48f"}
	{"level":"info","ts":"2025-11-22T00:23:06.821201Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"700ebc6e9635b48f"}
	{"level":"info","ts":"2025-11-22T00:23:06.821231Z","caller":"rafthttp/transport.go:354","msg":"removed remote peer","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"700ebc6e9635b48f"}
	
	
	==> kernel <==
	 00:23:16 up  5:05,  0 user,  load average: 0.40, 0.95, 1.18
	Linux ha-561110 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5a2fb45570b8d8d9729d3fcc9460e054e1a5757ce0b35d5e4c6ab8f496780c4f] <==
	I1122 00:22:41.465247       1 main.go:324] Node ha-561110-m03 has CIDR [10.244.2.0/24] 
	I1122 00:22:41.465303       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1122 00:22:41.465309       1 main.go:324] Node ha-561110-m04 has CIDR [10.244.3.0/24] 
	I1122 00:22:51.473085       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1122 00:22:51.473126       1 main.go:301] handling current node
	I1122 00:22:51.473142       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1122 00:22:51.473148       1 main.go:324] Node ha-561110-m02 has CIDR [10.244.1.0/24] 
	I1122 00:22:51.473281       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1122 00:22:51.473294       1 main.go:324] Node ha-561110-m03 has CIDR [10.244.2.0/24] 
	I1122 00:22:51.473352       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1122 00:22:51.473363       1 main.go:324] Node ha-561110-m04 has CIDR [10.244.3.0/24] 
	I1122 00:23:01.470409       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1122 00:23:01.470552       1 main.go:301] handling current node
	I1122 00:23:01.470584       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1122 00:23:01.470592       1 main.go:324] Node ha-561110-m02 has CIDR [10.244.1.0/24] 
	I1122 00:23:01.470789       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1122 00:23:01.470804       1 main.go:324] Node ha-561110-m03 has CIDR [10.244.2.0/24] 
	I1122 00:23:01.470889       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1122 00:23:01.470900       1 main.go:324] Node ha-561110-m04 has CIDR [10.244.3.0/24] 
	I1122 00:23:11.465362       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1122 00:23:11.465394       1 main.go:301] handling current node
	I1122 00:23:11.465416       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1122 00:23:11.465423       1 main.go:324] Node ha-561110-m02 has CIDR [10.244.1.0/24] 
	I1122 00:23:11.465568       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1122 00:23:11.465574       1 main.go:324] Node ha-561110-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [4cbb3fde391bd86e756416ec260b0b8a5501d5139da802107965d9e012c4eca5] <==
	I1122 00:15:18.445997       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1122 00:15:18.446301       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1122 00:15:18.447701       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3]
	I1122 00:15:18.452038       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1122 00:15:18.452137       1 policy_source.go:240] refreshing policies
	I1122 00:15:18.460639       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1122 00:15:18.471883       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1122 00:15:18.471973       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1122 00:15:18.484728       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1122 00:15:18.486315       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1122 00:15:18.488710       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1122 00:15:18.492798       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1122 00:15:18.495280       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1122 00:15:18.507423       1 cache.go:39] Caches are synced for autoregister controller
	I1122 00:15:18.534574       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:15:18.549678       1 controller.go:667] quota admission added evaluator for: endpoints
	I1122 00:15:18.565788       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1122 00:15:18.571045       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1122 00:15:19.403170       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1122 00:15:19.403318       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	W1122 00:15:19.985311       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I1122 00:15:20.110990       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1122 00:15:22.839985       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1122 00:15:22.952373       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1122 00:15:33.431623       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [1c929db60119ab54f03020d00f2063dc6672d329ea34f4504e502142bffbe644] <==
	I1122 00:14:51.749993       1 serving.go:386] Generated self-signed cert in-memory
	I1122 00:14:53.094715       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1122 00:14:53.095280       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:14:53.099971       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1122 00:14:53.101968       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1122 00:14:53.102195       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1122 00:14:53.102364       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1122 00:15:08.891956       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-controller-manager [555f050993ba210ea8b5a432f7b9d055cece81e4f3e958134fe029c08873937f] <==
	I1122 00:15:22.665955       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:15:22.665980       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1122 00:15:22.665989       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1122 00:15:22.670916       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1122 00:15:22.671810       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1122 00:15:22.671975       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1122 00:15:22.674739       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1122 00:15:22.700683       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1122 00:15:22.700732       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1122 00:15:22.700975       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1122 00:15:22.701031       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-561110-m04"
	I1122 00:15:22.702027       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:15:22.702218       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:15:22.702265       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1122 00:15:22.702335       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1122 00:15:22.702421       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-561110-m04"
	I1122 00:15:22.702475       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-561110"
	I1122 00:15:22.702508       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-561110-m02"
	I1122 00:15:22.702530       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-561110-m03"
	I1122 00:15:22.703121       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1122 00:16:02.360319       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-fg476 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-fg476\": the object has been modified; please apply your changes to the latest version and try again"
	I1122 00:16:02.360917       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"9e1e93e1-00b2-4af4-b92a-649228d61b24", APIVersion:"v1", ResourceVersion:"291", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-fg476 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-fg476": the object has been modified; please apply your changes to the latest version and try again
	I1122 00:21:22.783104       1 taint_eviction.go:111] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-7b57f96db7-jnjz9"
	E1122 00:21:23.049792       1 replica_set.go:587] "Unhandled Error" err="sync \"default/busybox-7b57f96db7\" failed with Operation cannot be fulfilled on replicasets.apps \"busybox-7b57f96db7\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1122 00:23:07.392273       1 garbagecollector.go:360] "Unhandled Error" err="error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"coordination.k8s.io/v1\", Kind:\"Lease\", Name:\"ha-561110-m03\", UID:\"89ed6ab2-2d42-416d-85b4-495b62b93ace\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"kube-node-lease\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Node\", Name:\"ha-561110-m03\", UID:\"60ac0879-7e66-4fe4-865c-9695d0489790\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: leases.coordination.k8s.io \"ha-561110-m03\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [d9e4613f17ffd567cd78a387d7add1e58e4b781fbb445147b8bfca54b9432ab5] <==
	I1122 00:15:21.735861       1 server_linux.go:53] "Using iptables proxy"
	I1122 00:15:22.275352       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1122 00:15:22.376449       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1122 00:15:22.376485       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1122 00:15:22.376557       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1122 00:15:22.535668       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1122 00:15:22.535795       1 server_linux.go:132] "Using iptables Proxier"
	I1122 00:15:22.609065       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1122 00:15:22.609513       1 server.go:527] "Version info" version="v1.34.1"
	I1122 00:15:22.609711       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:15:22.617349       1 config.go:106] "Starting endpoint slice config controller"
	I1122 00:15:22.642095       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1122 00:15:22.642216       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1122 00:15:22.625017       1 config.go:309] "Starting node config controller"
	I1122 00:15:22.642311       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1122 00:15:22.661330       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1122 00:15:22.618034       1 config.go:403] "Starting serviceCIDR config controller"
	I1122 00:15:22.661456       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1122 00:15:22.661484       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1122 00:15:22.617914       1 config.go:200] "Starting service config controller"
	I1122 00:15:22.667161       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1122 00:15:22.669962       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [4360f5517fd5eb7d570a98dee1b801419d3b650d7e890d5ddecc79946fba46db] <==
	E1122 00:15:06.983690       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1122 00:15:07.083902       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1122 00:15:07.669484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1122 00:15:08.371857       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1122 00:15:08.496001       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1122 00:15:09.010289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1122 00:15:09.013639       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1122 00:15:09.181881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1122 00:15:13.452037       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1122 00:15:13.489179       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1122 00:15:13.596578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1122 00:15:15.338465       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1122 00:15:15.586170       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1122 00:15:15.778676       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1122 00:15:16.291567       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1122 00:15:16.393784       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1122 00:15:16.654452       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1122 00:15:16.676020       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1122 00:15:16.720023       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1122 00:15:16.867178       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1122 00:15:17.894056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1122 00:15:17.894162       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1122 00:15:18.097763       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1122 00:15:18.407533       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1122 00:15:40.715350       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 22 00:15:19 ha-561110 kubelet[804]: E1122 00:15:19.202909     804 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-ha-561110\" already exists" pod="kube-system/etcd-ha-561110"
	Nov 22 00:15:19 ha-561110 kubelet[804]: E1122 00:15:19.208208     804 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-ha-561110\" already exists" pod="kube-system/etcd-ha-561110"
	Nov 22 00:15:19 ha-561110 kubelet[804]: I1122 00:15:19.208378     804 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ha-561110"
	Nov 22 00:15:19 ha-561110 kubelet[804]: E1122 00:15:19.222213     804 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ha-561110\" already exists" pod="kube-system/kube-apiserver-ha-561110"
	Nov 22 00:15:19 ha-561110 kubelet[804]: I1122 00:15:19.222405     804 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ha-561110"
	Nov 22 00:15:19 ha-561110 kubelet[804]: E1122 00:15:19.238353     804 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ha-561110\" already exists" pod="kube-system/kube-controller-manager-ha-561110"
	Nov 22 00:15:19 ha-561110 kubelet[804]: I1122 00:15:19.996652     804 apiserver.go:52] "Watching apiserver"
	Nov 22 00:15:20 ha-561110 kubelet[804]: I1122 00:15:20.004192     804 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 22 00:15:20 ha-561110 kubelet[804]: I1122 00:15:20.010755     804 kubelet.go:3202] "Trying to delete pod" pod="kube-system/kube-vip-ha-561110" podUID="f9bbfb1b-cc91-44c4-be9d-f028e6f3038f"
	Nov 22 00:15:20 ha-561110 kubelet[804]: I1122 00:15:20.042558     804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/318c6763-fea1-4564-86f6-18cfad691213-xtables-lock\") pod \"kube-proxy-fh5cv\" (UID: \"318c6763-fea1-4564-86f6-18cfad691213\") " pod="kube-system/kube-proxy-fh5cv"
	Nov 22 00:15:20 ha-561110 kubelet[804]: I1122 00:15:20.042916     804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/edeca4a6-de24-4444-be9c-cdcbf744f52a-lib-modules\") pod \"kindnet-7g65m\" (UID: \"edeca4a6-de24-4444-be9c-cdcbf744f52a\") " pod="kube-system/kindnet-7g65m"
	Nov 22 00:15:20 ha-561110 kubelet[804]: I1122 00:15:20.043044     804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/edeca4a6-de24-4444-be9c-cdcbf744f52a-xtables-lock\") pod \"kindnet-7g65m\" (UID: \"edeca4a6-de24-4444-be9c-cdcbf744f52a\") " pod="kube-system/kindnet-7g65m"
	Nov 22 00:15:20 ha-561110 kubelet[804]: I1122 00:15:20.043629     804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/318c6763-fea1-4564-86f6-18cfad691213-lib-modules\") pod \"kube-proxy-fh5cv\" (UID: \"318c6763-fea1-4564-86f6-18cfad691213\") " pod="kube-system/kube-proxy-fh5cv"
	Nov 22 00:15:20 ha-561110 kubelet[804]: I1122 00:15:20.043908     804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6bf95a26-263b-4088-904d-b344d4826342-tmp\") pod \"storage-provisioner\" (UID: \"6bf95a26-263b-4088-904d-b344d4826342\") " pod="kube-system/storage-provisioner"
	Nov 22 00:15:20 ha-561110 kubelet[804]: I1122 00:15:20.044454     804 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/edeca4a6-de24-4444-be9c-cdcbf744f52a-cni-cfg\") pod \"kindnet-7g65m\" (UID: \"edeca4a6-de24-4444-be9c-cdcbf744f52a\") " pod="kube-system/kindnet-7g65m"
	Nov 22 00:15:20 ha-561110 kubelet[804]: I1122 00:15:20.069531     804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12f5cffcd2e0febd6c4ae07da010fd8f" path="/var/lib/kubelet/pods/12f5cffcd2e0febd6c4ae07da010fd8f/volumes"
	Nov 22 00:15:20 ha-561110 kubelet[804]: I1122 00:15:20.170059     804 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 22 00:15:20 ha-561110 kubelet[804]: I1122 00:15:20.199192     804 kubelet.go:3208] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-vip-ha-561110"
	Nov 22 00:15:20 ha-561110 kubelet[804]: I1122 00:15:20.199382     804 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-561110"
	Nov 22 00:15:20 ha-561110 kubelet[804]: W1122 00:15:20.465863     804 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/b491a219f5f6a6da2c04a012513c1a2266c783068f6131eddf3365f4209ece96/crio-b8e440f614e569ef72e19243d1540dd34639d19916d8b0e346545eb4867daf57 WatchSource:0}: Error finding container b8e440f614e569ef72e19243d1540dd34639d19916d8b0e346545eb4867daf57: Status 404 returned error can't find the container with id b8e440f614e569ef72e19243d1540dd34639d19916d8b0e346545eb4867daf57
	Nov 22 00:15:20 ha-561110 kubelet[804]: W1122 00:15:20.651808     804 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/b491a219f5f6a6da2c04a012513c1a2266c783068f6131eddf3365f4209ece96/crio-b641fd83b9816fb348d03cb35df6649a6ab3d78bdff2936914e0167db04fad0a WatchSource:0}: Error finding container b641fd83b9816fb348d03cb35df6649a6ab3d78bdff2936914e0167db04fad0a: Status 404 returned error can't find the container with id b641fd83b9816fb348d03cb35df6649a6ab3d78bdff2936914e0167db04fad0a
	Nov 22 00:15:47 ha-561110 kubelet[804]: E1122 00:15:47.996298     804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60db7bf551c52a828501521a7f79a373f51d5d988223afbc4f6f1a9ca6872e12\": container with ID starting with 60db7bf551c52a828501521a7f79a373f51d5d988223afbc4f6f1a9ca6872e12 not found: ID does not exist" containerID="60db7bf551c52a828501521a7f79a373f51d5d988223afbc4f6f1a9ca6872e12"
	Nov 22 00:15:47 ha-561110 kubelet[804]: I1122 00:15:47.996363     804 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="60db7bf551c52a828501521a7f79a373f51d5d988223afbc4f6f1a9ca6872e12" err="rpc error: code = NotFound desc = could not find container \"60db7bf551c52a828501521a7f79a373f51d5d988223afbc4f6f1a9ca6872e12\": container with ID starting with 60db7bf551c52a828501521a7f79a373f51d5d988223afbc4f6f1a9ca6872e12 not found: ID does not exist"
	Nov 22 00:15:52 ha-561110 kubelet[804]: I1122 00:15:52.388242     804 scope.go:117] "RemoveContainer" containerID="60513ca704c00c488d3491dd4f8a9e84dd69cf4c098d6dddf6f9ecba18d70a70"
	Nov 22 00:16:25 ha-561110 kubelet[804]: I1122 00:16:25.065664     804 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-561110"
	

                                                
                                                
-- /stdout --
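The kube-scheduler log above ends with a run of "Failed to watch ... forbidden" errors; the csinodes entry even records that the default clusterroles (system:kube-scheduler and friends) were "not found", so these look like transient RBAC errors while the restarted control plane re-reconciles its default roles, and the final "Caches are synced" line suggests they cleared. A minimal client-go sketch, assuming only a reachable kubeconfig, that asks the API server whether system:kube-scheduler may list one of the resources named above (the verb, group and resource are copied from the log; everything else here is illustrative, not part of the test harness):

// scheduler_rbac_check.go: ask the API server (via SubjectAccessReview) whether
// system:kube-scheduler is currently allowed to list csinodes.storage.k8s.io,
// one of the resources the forbidden errors above complain about.
package main

import (
	"context"
	"fmt"
	"log"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a kubeconfig at the usual $HOME/.kube/config location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	sar := &authv1.SubjectAccessReview{
		Spec: authv1.SubjectAccessReviewSpec{
			User: "system:kube-scheduler",
			ResourceAttributes: &authv1.ResourceAttributes{
				Verb:     "list",
				Group:    "storage.k8s.io",
				Resource: "csinodes",
			},
		},
	}
	res, err := cs.AuthorizationV1().SubjectAccessReviews().Create(context.Background(), sar, metav1.CreateOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("allowed=%v reason=%q\n", res.Status.Allowed, res.Status.Reason)
}

Once RBAC reconciliation has finished, the same query should report allowed=true.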
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-561110 -n ha-561110
helpers_test.go:269: (dbg) Run:  kubectl --context ha-561110 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-hkwmz
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-561110 describe pod busybox-7b57f96db7-hkwmz
helpers_test.go:290: (dbg) kubectl --context ha-561110 describe pod busybox-7b57f96db7-hkwmz:

                                                
                                                
-- stdout --
	Name:             busybox-7b57f96db7-hkwmz
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-82jj6 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-82jj6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age   From               Message
	  ----     ------            ----  ----               -------
	  Warning  FailedScheduling  115s  default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  115s  default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  11s   default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  11s   default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (3.22s)
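The describe output above shows why busybox-7b57f96db7-hkwmz stays Pending: on the 4-node cluster, two nodes fail the deployment's pod anti-affinity and the rest carry the node.kubernetes.io/unreachable taint (or are unschedulable in the later events), so the scheduler has nowhere to place it. The post-mortem surfaces such pods with kubectl's --field-selector=status.phase!=Running; the same query in client-go looks roughly like the sketch below (the kubeconfig path and output format are illustrative, not part of the harness):

// list_pending.go: mirror the post-mortem query
// `kubectl get po -A --field-selector=status.phase!=Running` using client-go.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a kubeconfig at the usual $HOME/.kube/config location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Same field selector the helper uses: every pod not in phase Running, across all namespaces.
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}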

                                                
                                    
x
+
TestJSONOutput/pause/Command (1.9s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-557707 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-557707 --output=json --user=testUser: exit status 80 (1.895027565s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"bc599827-cd9b-41c4-af04-296df8e3a051","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-557707 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"ba3e1c7c-31ad-4356-b923-860cc954783a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-22T00:27:17Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"48bc3bff-574f-4204-8a6a-6c097210ef13","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-557707 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.90s)
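Both JSONOutput failures produce the same stream shape: one CloudEvents-style JSON object per line, a step event followed by an io.k8s.sigs.minikube.error event whose data carries exitcode "80" and the GUEST_PAUSE / GUEST_UNPAUSE name. A minimal sketch for pulling those fields back out of such a stream, modelling only the keys visible in the events above (nothing beyond them is assumed):

// decode_events.go: parse the line-delimited JSON events emitted with --output=json
// and print the error events. Only fields visible in this report are modelled.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os"
)

type minikubeEvent struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	// e.g. pipe: out/minikube-linux-arm64 pause -p json-output-557707 --output=json | go run decode_events.go
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // error events can be long (boxed advice text)
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error event %s: exitcode=%s name=%s\n", ev.ID, ev.Data["exitcode"], ev.Data["name"])
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}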

                                                
                                    
x
+
TestJSONOutput/unpause/Command (1.58s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-557707 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-557707 --output=json --user=testUser: exit status 80 (1.58171948s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"4b51e248-ff58-4e61-9c17-5ff8046241b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-557707 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"cdda0f41-b357-40b1-aa9e-01f216ebea8c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-22T00:27:18Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"b901def6-c06f-45f1-9fa2-78818e6888a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-557707 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.58s)
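Both events point at the same underlying probe: `sudo runc list -f json` exiting 1 with "open /run/runc: no such file or directory", /run/runc being the state root runc uses by default when run as root. A sketch of that probe run directly and decoded (illustrative only; minikube executes it inside the node over SSH, and the struct below models only a subset of runc's list output):

// runc_list.go: run the same listing the pause path depends on and print the
// containers runc knows about.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// container mirrors a subset of the JSON objects `runc list -f json` prints.
type container struct {
	ID     string `json:"id"`
	Status string `json:"status"`
	Bundle string `json:"bundle"`
}

func main() {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		// This is the failure mode in the report: the state root is missing, the
		// listing exits non-zero, and minikube aborts with GUEST_PAUSE / GUEST_UNPAUSE.
		log.Fatalf("runc list failed: %v", err)
	}
	var cs []container
	if err := json.Unmarshal(out, &cs); err != nil {
		log.Fatal(err)
	}
	for _, c := range cs {
		fmt.Printf("%s %s %s\n", c.ID, c.Status, c.Bundle)
	}
}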

                                                
                                    
x
+
TestPause/serial/Pause (7.55s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-028559 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-028559 --alsologtostderr -v=5: exit status 80 (2.616418847s)

                                                
                                                
-- stdout --
	* Pausing node pause-028559 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1122 00:49:38.502416  677529 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:49:38.503626  677529 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:49:38.503672  677529 out.go:374] Setting ErrFile to fd 2...
	I1122 00:49:38.503695  677529 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:49:38.504059  677529 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1122 00:49:38.504403  677529 out.go:368] Setting JSON to false
	I1122 00:49:38.504461  677529 mustload.go:66] Loading cluster: pause-028559
	I1122 00:49:38.504919  677529 config.go:182] Loaded profile config "pause-028559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:49:38.505486  677529 cli_runner.go:164] Run: docker container inspect pause-028559 --format={{.State.Status}}
	I1122 00:49:38.523736  677529 host.go:66] Checking if "pause-028559" exists ...
	I1122 00:49:38.524048  677529 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:49:38.587426  677529 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-22 00:49:38.578072793 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:49:38.588075  677529 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-028559 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1122 00:49:38.591184  677529 out.go:179] * Pausing node pause-028559 ... 
	I1122 00:49:38.595089  677529 host.go:66] Checking if "pause-028559" exists ...
	I1122 00:49:38.595449  677529 ssh_runner.go:195] Run: systemctl --version
	I1122 00:49:38.595500  677529 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-028559
	I1122 00:49:38.614905  677529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33745 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/pause-028559/id_rsa Username:docker}
	I1122 00:49:38.720187  677529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:49:38.732752  677529 pause.go:52] kubelet running: true
	I1122 00:49:38.732830  677529 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1122 00:49:38.977566  677529 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1122 00:49:38.977674  677529 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1122 00:49:39.052677  677529 cri.go:89] found id: "56e196e0cbb051ab34ffee4c1a27c68525705ef12cf36abbf63ad2924e3b38d1"
	I1122 00:49:39.052705  677529 cri.go:89] found id: "87e6aa383dd95744b3c571bbd518cffcd7a4eb8cd0d3a6e584ed17b3615976fd"
	I1122 00:49:39.052710  677529 cri.go:89] found id: "04ac46e69fa75a9a16c68b21e8dbf076926ba49248fd3f5118e8445fb78a9d5a"
	I1122 00:49:39.052714  677529 cri.go:89] found id: "096e422d76e9c2d03ee46d5100a4c9d88d27872157f3e04d2ca3d33d12269f96"
	I1122 00:49:39.052718  677529 cri.go:89] found id: "d3753a55d1b0852aeac6d506d250fc46da9733dcd90885d3802044a0a80ad951"
	I1122 00:49:39.052721  677529 cri.go:89] found id: "369bd2d9f6691fe8442c1241cc0e13dde6eb84069c52da0c86e0481560a45f58"
	I1122 00:49:39.052724  677529 cri.go:89] found id: "c697a18245b5616f58771650b29470b900c4e63fb555bd2347a20b506820e266"
	I1122 00:49:39.052727  677529 cri.go:89] found id: "e8a18ac29ae5ac05f6b5b4c70bcf6b1fc73a710e59136307cfa68c8bcd36557d"
	I1122 00:49:39.052730  677529 cri.go:89] found id: "7cf54964fa6209d21f425b45edbf33a457dcdc58ce72370d8035cad09e292b10"
	I1122 00:49:39.052736  677529 cri.go:89] found id: "8a47d7c0ee952b2d53a1e55f636df9b8ea9e35a1de95e8cd16ba1ee91d2429e5"
	I1122 00:49:39.052740  677529 cri.go:89] found id: "36494fae0c15c7ac23088851e0409e2f96cb7f3066877902ebe7aedf80916b67"
	I1122 00:49:39.052743  677529 cri.go:89] found id: "b36b71426eeddbbd8a66fee6ba6d51873fa9668612addcc8f00a16c6fdb775fd"
	I1122 00:49:39.052745  677529 cri.go:89] found id: "fe27dafacf48d07af4ed5cb9690723267dc16f0e9bc5356896a5e1d595009ff0"
	I1122 00:49:39.052748  677529 cri.go:89] found id: "c1bb2c7a299bbbaca1218d7345b106ea8559d1df49b48d1e8effc89a6e7a38b3"
	I1122 00:49:39.052752  677529 cri.go:89] found id: ""
	I1122 00:49:39.052801  677529 ssh_runner.go:195] Run: sudo runc list -f json
	I1122 00:49:39.063438  677529 retry.go:31] will retry after 298.764877ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:49:39Z" level=error msg="open /run/runc: no such file or directory"
	I1122 00:49:39.362719  677529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:49:39.375674  677529 pause.go:52] kubelet running: false
	I1122 00:49:39.375755  677529 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1122 00:49:39.533587  677529 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1122 00:49:39.533701  677529 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1122 00:49:39.607112  677529 cri.go:89] found id: "56e196e0cbb051ab34ffee4c1a27c68525705ef12cf36abbf63ad2924e3b38d1"
	I1122 00:49:39.607133  677529 cri.go:89] found id: "87e6aa383dd95744b3c571bbd518cffcd7a4eb8cd0d3a6e584ed17b3615976fd"
	I1122 00:49:39.607137  677529 cri.go:89] found id: "04ac46e69fa75a9a16c68b21e8dbf076926ba49248fd3f5118e8445fb78a9d5a"
	I1122 00:49:39.607141  677529 cri.go:89] found id: "096e422d76e9c2d03ee46d5100a4c9d88d27872157f3e04d2ca3d33d12269f96"
	I1122 00:49:39.607144  677529 cri.go:89] found id: "d3753a55d1b0852aeac6d506d250fc46da9733dcd90885d3802044a0a80ad951"
	I1122 00:49:39.607153  677529 cri.go:89] found id: "369bd2d9f6691fe8442c1241cc0e13dde6eb84069c52da0c86e0481560a45f58"
	I1122 00:49:39.607156  677529 cri.go:89] found id: "c697a18245b5616f58771650b29470b900c4e63fb555bd2347a20b506820e266"
	I1122 00:49:39.607161  677529 cri.go:89] found id: "e8a18ac29ae5ac05f6b5b4c70bcf6b1fc73a710e59136307cfa68c8bcd36557d"
	I1122 00:49:39.607163  677529 cri.go:89] found id: "7cf54964fa6209d21f425b45edbf33a457dcdc58ce72370d8035cad09e292b10"
	I1122 00:49:39.607170  677529 cri.go:89] found id: "8a47d7c0ee952b2d53a1e55f636df9b8ea9e35a1de95e8cd16ba1ee91d2429e5"
	I1122 00:49:39.607173  677529 cri.go:89] found id: "36494fae0c15c7ac23088851e0409e2f96cb7f3066877902ebe7aedf80916b67"
	I1122 00:49:39.607176  677529 cri.go:89] found id: "b36b71426eeddbbd8a66fee6ba6d51873fa9668612addcc8f00a16c6fdb775fd"
	I1122 00:49:39.607179  677529 cri.go:89] found id: "fe27dafacf48d07af4ed5cb9690723267dc16f0e9bc5356896a5e1d595009ff0"
	I1122 00:49:39.607182  677529 cri.go:89] found id: "c1bb2c7a299bbbaca1218d7345b106ea8559d1df49b48d1e8effc89a6e7a38b3"
	I1122 00:49:39.607185  677529 cri.go:89] found id: ""
	I1122 00:49:39.607239  677529 ssh_runner.go:195] Run: sudo runc list -f json
	I1122 00:49:39.617787  677529 retry.go:31] will retry after 483.720874ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:49:39Z" level=error msg="open /run/runc: no such file or directory"
	I1122 00:49:40.102684  677529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:49:40.117528  677529 pause.go:52] kubelet running: false
	I1122 00:49:40.117596  677529 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1122 00:49:40.269476  677529 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1122 00:49:40.269556  677529 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1122 00:49:40.340566  677529 cri.go:89] found id: "56e196e0cbb051ab34ffee4c1a27c68525705ef12cf36abbf63ad2924e3b38d1"
	I1122 00:49:40.340603  677529 cri.go:89] found id: "87e6aa383dd95744b3c571bbd518cffcd7a4eb8cd0d3a6e584ed17b3615976fd"
	I1122 00:49:40.340609  677529 cri.go:89] found id: "04ac46e69fa75a9a16c68b21e8dbf076926ba49248fd3f5118e8445fb78a9d5a"
	I1122 00:49:40.340631  677529 cri.go:89] found id: "096e422d76e9c2d03ee46d5100a4c9d88d27872157f3e04d2ca3d33d12269f96"
	I1122 00:49:40.340642  677529 cri.go:89] found id: "d3753a55d1b0852aeac6d506d250fc46da9733dcd90885d3802044a0a80ad951"
	I1122 00:49:40.340648  677529 cri.go:89] found id: "369bd2d9f6691fe8442c1241cc0e13dde6eb84069c52da0c86e0481560a45f58"
	I1122 00:49:40.340651  677529 cri.go:89] found id: "c697a18245b5616f58771650b29470b900c4e63fb555bd2347a20b506820e266"
	I1122 00:49:40.340654  677529 cri.go:89] found id: "e8a18ac29ae5ac05f6b5b4c70bcf6b1fc73a710e59136307cfa68c8bcd36557d"
	I1122 00:49:40.340658  677529 cri.go:89] found id: "7cf54964fa6209d21f425b45edbf33a457dcdc58ce72370d8035cad09e292b10"
	I1122 00:49:40.340679  677529 cri.go:89] found id: "8a47d7c0ee952b2d53a1e55f636df9b8ea9e35a1de95e8cd16ba1ee91d2429e5"
	I1122 00:49:40.340690  677529 cri.go:89] found id: "36494fae0c15c7ac23088851e0409e2f96cb7f3066877902ebe7aedf80916b67"
	I1122 00:49:40.340694  677529 cri.go:89] found id: "b36b71426eeddbbd8a66fee6ba6d51873fa9668612addcc8f00a16c6fdb775fd"
	I1122 00:49:40.340697  677529 cri.go:89] found id: "fe27dafacf48d07af4ed5cb9690723267dc16f0e9bc5356896a5e1d595009ff0"
	I1122 00:49:40.340712  677529 cri.go:89] found id: "c1bb2c7a299bbbaca1218d7345b106ea8559d1df49b48d1e8effc89a6e7a38b3"
	I1122 00:49:40.340722  677529 cri.go:89] found id: ""
	I1122 00:49:40.340795  677529 ssh_runner.go:195] Run: sudo runc list -f json
	I1122 00:49:40.351676  677529 retry.go:31] will retry after 351.740982ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:49:40Z" level=error msg="open /run/runc: no such file or directory"
	I1122 00:49:40.704201  677529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:49:40.721938  677529 pause.go:52] kubelet running: false
	I1122 00:49:40.722003  677529 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1122 00:49:40.940501  677529 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1122 00:49:40.940583  677529 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1122 00:49:41.028056  677529 cri.go:89] found id: "56e196e0cbb051ab34ffee4c1a27c68525705ef12cf36abbf63ad2924e3b38d1"
	I1122 00:49:41.028081  677529 cri.go:89] found id: "87e6aa383dd95744b3c571bbd518cffcd7a4eb8cd0d3a6e584ed17b3615976fd"
	I1122 00:49:41.028122  677529 cri.go:89] found id: "04ac46e69fa75a9a16c68b21e8dbf076926ba49248fd3f5118e8445fb78a9d5a"
	I1122 00:49:41.028133  677529 cri.go:89] found id: "096e422d76e9c2d03ee46d5100a4c9d88d27872157f3e04d2ca3d33d12269f96"
	I1122 00:49:41.028138  677529 cri.go:89] found id: "d3753a55d1b0852aeac6d506d250fc46da9733dcd90885d3802044a0a80ad951"
	I1122 00:49:41.028148  677529 cri.go:89] found id: "369bd2d9f6691fe8442c1241cc0e13dde6eb84069c52da0c86e0481560a45f58"
	I1122 00:49:41.028153  677529 cri.go:89] found id: "c697a18245b5616f58771650b29470b900c4e63fb555bd2347a20b506820e266"
	I1122 00:49:41.028157  677529 cri.go:89] found id: "e8a18ac29ae5ac05f6b5b4c70bcf6b1fc73a710e59136307cfa68c8bcd36557d"
	I1122 00:49:41.028160  677529 cri.go:89] found id: "7cf54964fa6209d21f425b45edbf33a457dcdc58ce72370d8035cad09e292b10"
	I1122 00:49:41.028166  677529 cri.go:89] found id: "8a47d7c0ee952b2d53a1e55f636df9b8ea9e35a1de95e8cd16ba1ee91d2429e5"
	I1122 00:49:41.028173  677529 cri.go:89] found id: "36494fae0c15c7ac23088851e0409e2f96cb7f3066877902ebe7aedf80916b67"
	I1122 00:49:41.028176  677529 cri.go:89] found id: "b36b71426eeddbbd8a66fee6ba6d51873fa9668612addcc8f00a16c6fdb775fd"
	I1122 00:49:41.028189  677529 cri.go:89] found id: "fe27dafacf48d07af4ed5cb9690723267dc16f0e9bc5356896a5e1d595009ff0"
	I1122 00:49:41.028192  677529 cri.go:89] found id: "c1bb2c7a299bbbaca1218d7345b106ea8559d1df49b48d1e8effc89a6e7a38b3"
	I1122 00:49:41.028196  677529 cri.go:89] found id: ""
	I1122 00:49:41.028244  677529 ssh_runner.go:195] Run: sudo runc list -f json
	I1122 00:49:41.049196  677529 out.go:203] 
	W1122 00:49:41.052112  677529 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:49:41Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:49:41Z" level=error msg="open /run/runc: no such file or directory"
	
	W1122 00:49:41.052137  677529 out.go:285] * 
	* 
	W1122 00:49:41.060202  677529 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1122 00:49:41.064980  677529 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-028559 --alsologtostderr -v=5" : exit status 80
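The stderr above shows the pause path retrying the failing probe three times ("will retry after 298.764877ms / 483.720874ms / 351.740982ms") before surfacing GUEST_PAUSE. A generic sketch of that retry-with-jitter pattern, not minikube's actual retry.go:

// retry_sketch.go: retry a failing operation a few times with a randomized delay,
// logging each retry the way the trace above does, then give up with the last error.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithJitter calls fn up to attempts times, sleeping a jittered delay between
// tries, and returns the last error if every attempt fails.
func retryWithJitter(attempts int, base time.Duration, fn func() error) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		if lastErr = fn(); lastErr == nil {
			return nil
		}
		delay := base + time.Duration(rand.Int63n(int64(base))) // jitter, like the varying delays in the log
		fmt.Printf("will retry after %v: %v\n", delay, lastErr)
		time.Sleep(delay)
	}
	return lastErr
}

func main() {
	err := retryWithJitter(3, 300*time.Millisecond, func() error {
		return errors.New("list running: runc: sudo runc list -f json: exit status 1")
	})
	fmt.Println("giving up:", err)
}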
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-028559
helpers_test.go:243: (dbg) docker inspect pause-028559:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6a94f69817b77fc564d5d76744b1439060476eb42bc40435169dbd0ee1102740",
	        "Created": "2025-11-22T00:48:27.960744645Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 672656,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-22T00:48:28.033486895Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:97061622515a85470dad13cb046ad5fc5021fcd2b05aa921a620abb52951cd5d",
	        "ResolvConfPath": "/var/lib/docker/containers/6a94f69817b77fc564d5d76744b1439060476eb42bc40435169dbd0ee1102740/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6a94f69817b77fc564d5d76744b1439060476eb42bc40435169dbd0ee1102740/hostname",
	        "HostsPath": "/var/lib/docker/containers/6a94f69817b77fc564d5d76744b1439060476eb42bc40435169dbd0ee1102740/hosts",
	        "LogPath": "/var/lib/docker/containers/6a94f69817b77fc564d5d76744b1439060476eb42bc40435169dbd0ee1102740/6a94f69817b77fc564d5d76744b1439060476eb42bc40435169dbd0ee1102740-json.log",
	        "Name": "/pause-028559",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-028559:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-028559",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6a94f69817b77fc564d5d76744b1439060476eb42bc40435169dbd0ee1102740",
	                "LowerDir": "/var/lib/docker/overlay2/c149ec3533d2a3b038708dde1e912b8b07514fcbcc3d1ff393f0a0a670aea096-init/diff:/var/lib/docker/overlay2/7e8788c6de692bc1c3758a2bb2c4b8da0fbba26855f855c0f3b655bfbdd92f8e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c149ec3533d2a3b038708dde1e912b8b07514fcbcc3d1ff393f0a0a670aea096/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c149ec3533d2a3b038708dde1e912b8b07514fcbcc3d1ff393f0a0a670aea096/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c149ec3533d2a3b038708dde1e912b8b07514fcbcc3d1ff393f0a0a670aea096/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-028559",
	                "Source": "/var/lib/docker/volumes/pause-028559/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-028559",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-028559",
	                "name.minikube.sigs.k8s.io": "pause-028559",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f34db19d45abcc46dc4209db03db2d51e354914f726aa8ba5d05989b3d7a42e8",
	            "SandboxKey": "/var/run/docker/netns/f34db19d45ab",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33745"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33746"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33749"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33747"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33748"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-028559": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:43:28:1d:6f:9f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f14dae99aeca43c4226c8eb18b4de4b38a2da23b90842e84338c84d11826484b",
	                    "EndpointID": "a66044e880589ed0971e98a7c1b32e86f3e945a96d2aaba8fc0bc5d539026083",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-028559",
	                        "6a94f69817b7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
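The inspect output confirms the port mapping the trace relied on: 22/tcp is published on 127.0.0.1:33745, which is what the cli_runner call earlier resolved with the template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}} before opening the SSH client. A small sketch of the same lookup (the container name is the profile name; the rest is illustrative):

// ssh_port.go: resolve the host port that Docker published for the node's SSH port (22/tcp).
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func hostSSHPort(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostSSHPort("pause-028559")
	if err != nil {
		log.Fatal(err)
	}
	// Against the inspect output above this prints: ssh endpoint: 127.0.0.1:33745
	fmt.Println("ssh endpoint:", "127.0.0.1:"+port)
}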
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-028559 -n pause-028559
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-028559 -n pause-028559: exit status 2 (352.377465ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-028559 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-028559 logs -n 25: (1.50296539s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-307118 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-307118       │ jenkins │ v1.37.0 │ 22 Nov 25 00:43 UTC │ 22 Nov 25 00:44 UTC │
	│ start   │ -p missing-upgrade-264026 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-264026    │ jenkins │ v1.32.0 │ 22 Nov 25 00:44 UTC │ 22 Nov 25 00:45 UTC │
	│ start   │ -p NoKubernetes-307118 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-307118       │ jenkins │ v1.37.0 │ 22 Nov 25 00:44 UTC │ 22 Nov 25 00:44 UTC │
	│ delete  │ -p NoKubernetes-307118                                                                                                                   │ NoKubernetes-307118       │ jenkins │ v1.37.0 │ 22 Nov 25 00:44 UTC │ 22 Nov 25 00:44 UTC │
	│ start   │ -p NoKubernetes-307118 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-307118       │ jenkins │ v1.37.0 │ 22 Nov 25 00:44 UTC │ 22 Nov 25 00:44 UTC │
	│ ssh     │ -p NoKubernetes-307118 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-307118       │ jenkins │ v1.37.0 │ 22 Nov 25 00:44 UTC │                     │
	│ stop    │ -p NoKubernetes-307118                                                                                                                   │ NoKubernetes-307118       │ jenkins │ v1.37.0 │ 22 Nov 25 00:44 UTC │ 22 Nov 25 00:45 UTC │
	│ start   │ -p NoKubernetes-307118 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-307118       │ jenkins │ v1.37.0 │ 22 Nov 25 00:45 UTC │ 22 Nov 25 00:45 UTC │
	│ ssh     │ -p NoKubernetes-307118 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-307118       │ jenkins │ v1.37.0 │ 22 Nov 25 00:45 UTC │                     │
	│ delete  │ -p NoKubernetes-307118                                                                                                                   │ NoKubernetes-307118       │ jenkins │ v1.37.0 │ 22 Nov 25 00:45 UTC │ 22 Nov 25 00:45 UTC │
	│ start   │ -p kubernetes-upgrade-134864 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-134864 │ jenkins │ v1.37.0 │ 22 Nov 25 00:45 UTC │ 22 Nov 25 00:45 UTC │
	│ start   │ -p missing-upgrade-264026 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-264026    │ jenkins │ v1.37.0 │ 22 Nov 25 00:45 UTC │ 22 Nov 25 00:46 UTC │
	│ stop    │ -p kubernetes-upgrade-134864                                                                                                             │ kubernetes-upgrade-134864 │ jenkins │ v1.37.0 │ 22 Nov 25 00:45 UTC │ 22 Nov 25 00:45 UTC │
	│ start   │ -p kubernetes-upgrade-134864 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-134864 │ jenkins │ v1.37.0 │ 22 Nov 25 00:45 UTC │                     │
	│ delete  │ -p missing-upgrade-264026                                                                                                                │ missing-upgrade-264026    │ jenkins │ v1.37.0 │ 22 Nov 25 00:46 UTC │ 22 Nov 25 00:46 UTC │
	│ start   │ -p stopped-upgrade-070222 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-070222    │ jenkins │ v1.32.0 │ 22 Nov 25 00:46 UTC │ 22 Nov 25 00:46 UTC │
	│ stop    │ stopped-upgrade-070222 stop                                                                                                              │ stopped-upgrade-070222    │ jenkins │ v1.32.0 │ 22 Nov 25 00:46 UTC │ 22 Nov 25 00:46 UTC │
	│ start   │ -p stopped-upgrade-070222 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-070222    │ jenkins │ v1.37.0 │ 22 Nov 25 00:46 UTC │ 22 Nov 25 00:47 UTC │
	│ delete  │ -p stopped-upgrade-070222                                                                                                                │ stopped-upgrade-070222    │ jenkins │ v1.37.0 │ 22 Nov 25 00:47 UTC │ 22 Nov 25 00:47 UTC │
	│ start   │ -p running-upgrade-234956 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-234956    │ jenkins │ v1.32.0 │ 22 Nov 25 00:47 UTC │ 22 Nov 25 00:48 UTC │
	│ start   │ -p running-upgrade-234956 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-234956    │ jenkins │ v1.37.0 │ 22 Nov 25 00:48 UTC │ 22 Nov 25 00:48 UTC │
	│ delete  │ -p running-upgrade-234956                                                                                                                │ running-upgrade-234956    │ jenkins │ v1.37.0 │ 22 Nov 25 00:48 UTC │ 22 Nov 25 00:48 UTC │
	│ start   │ -p pause-028559 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-028559              │ jenkins │ v1.37.0 │ 22 Nov 25 00:48 UTC │ 22 Nov 25 00:49 UTC │
	│ start   │ -p pause-028559 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-028559              │ jenkins │ v1.37.0 │ 22 Nov 25 00:49 UTC │ 22 Nov 25 00:49 UTC │
	│ pause   │ -p pause-028559 --alsologtostderr -v=5                                                                                                   │ pause-028559              │ jenkins │ v1.37.0 │ 22 Nov 25 00:49 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 00:49:12
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1122 00:49:12.742265  675703 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:49:12.742449  675703 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:49:12.742459  675703 out.go:374] Setting ErrFile to fd 2...
	I1122 00:49:12.742465  675703 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:49:12.742721  675703 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1122 00:49:12.743088  675703 out.go:368] Setting JSON to false
	I1122 00:49:12.744402  675703 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":19869,"bootTime":1763752684,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1122 00:49:12.744473  675703 start.go:143] virtualization:  
	I1122 00:49:12.748296  675703 out.go:179] * [pause-028559] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1122 00:49:12.752033  675703 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:49:12.752135  675703 notify.go:221] Checking for updates...
	I1122 00:49:12.757649  675703 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:49:12.760587  675703 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1122 00:49:12.763496  675703 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-513600/.minikube
	I1122 00:49:12.766366  675703 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1122 00:49:12.769443  675703 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:49:12.772761  675703 config.go:182] Loaded profile config "pause-028559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:49:12.773310  675703 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:49:12.800610  675703 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1122 00:49:12.800719  675703 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:49:12.861541  675703 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-22 00:49:12.85166246 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:49:12.861653  675703 docker.go:319] overlay module found
	I1122 00:49:12.866766  675703 out.go:179] * Using the docker driver based on existing profile
	I1122 00:49:12.869525  675703 start.go:309] selected driver: docker
	I1122 00:49:12.869544  675703 start.go:930] validating driver "docker" against &{Name:pause-028559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-028559 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:49:12.869670  675703 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:49:12.869768  675703 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:49:12.934723  675703 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-22 00:49:12.926156642 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:49:12.935122  675703 cni.go:84] Creating CNI manager for ""
	I1122 00:49:12.935187  675703 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:49:12.935229  675703 start.go:353] cluster config:
	{Name:pause-028559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-028559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:49:12.940261  675703 out.go:179] * Starting "pause-028559" primary control-plane node in "pause-028559" cluster
	I1122 00:49:12.943231  675703 cache.go:134] Beginning downloading kic base image for docker with crio
	I1122 00:49:12.946084  675703 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:49:12.949098  675703 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:49:12.949150  675703 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1122 00:49:12.949160  675703 cache.go:65] Caching tarball of preloaded images
	I1122 00:49:12.949177  675703 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:49:12.949239  675703 preload.go:238] Found /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1122 00:49:12.949249  675703 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1122 00:49:12.949385  675703 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/pause-028559/config.json ...
	I1122 00:49:12.969168  675703 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:49:12.969190  675703 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:49:12.969208  675703 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:49:12.969230  675703 start.go:360] acquireMachinesLock for pause-028559: {Name:mk639f010a9c552843e3e85aa47fa9daf6e9b9cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:49:12.969291  675703 start.go:364] duration metric: took 34.805µs to acquireMachinesLock for "pause-028559"
	I1122 00:49:12.969315  675703 start.go:96] Skipping create...Using existing machine configuration
	I1122 00:49:12.969320  675703 fix.go:54] fixHost starting: 
	I1122 00:49:12.969579  675703 cli_runner.go:164] Run: docker container inspect pause-028559 --format={{.State.Status}}
	I1122 00:49:12.988057  675703 fix.go:112] recreateIfNeeded on pause-028559: state=Running err=<nil>
	W1122 00:49:12.988082  675703 fix.go:138] unexpected machine state, will restart: <nil>
	I1122 00:49:15.839588  659783 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1122 00:49:15.839649  659783 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:49:15.839721  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:49:15.866942  659783 cri.go:89] found id: "513641c31371d8c05f0f09cedcd3f1444e01013c0f90e22c86ddeadf4d1a516c"
	I1122 00:49:15.866964  659783 cri.go:89] found id: "b12168475560a7e49fcbb662aed1e0002ea914f70e776f082083dc3a3c822135"
	I1122 00:49:15.866968  659783 cri.go:89] found id: ""
	I1122 00:49:15.866976  659783 logs.go:282] 2 containers: [513641c31371d8c05f0f09cedcd3f1444e01013c0f90e22c86ddeadf4d1a516c b12168475560a7e49fcbb662aed1e0002ea914f70e776f082083dc3a3c822135]
	I1122 00:49:15.867031  659783 ssh_runner.go:195] Run: which crictl
	I1122 00:49:15.870652  659783 ssh_runner.go:195] Run: which crictl
	I1122 00:49:15.874915  659783 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1122 00:49:15.874985  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:49:15.901174  659783 cri.go:89] found id: ""
	I1122 00:49:15.901206  659783 logs.go:282] 0 containers: []
	W1122 00:49:15.901215  659783 logs.go:284] No container was found matching "etcd"
	I1122 00:49:15.901221  659783 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1122 00:49:15.901297  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:49:15.927620  659783 cri.go:89] found id: ""
	I1122 00:49:15.927641  659783 logs.go:282] 0 containers: []
	W1122 00:49:15.927650  659783 logs.go:284] No container was found matching "coredns"
	I1122 00:49:15.927656  659783 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:49:15.927712  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:49:15.965325  659783 cri.go:89] found id: "c546164b4e284af119113a13e0a67e01c0dd3fe9b208a4092c79664dd912c8e8"
	I1122 00:49:15.965349  659783 cri.go:89] found id: ""
	I1122 00:49:15.965357  659783 logs.go:282] 1 containers: [c546164b4e284af119113a13e0a67e01c0dd3fe9b208a4092c79664dd912c8e8]
	I1122 00:49:15.965415  659783 ssh_runner.go:195] Run: which crictl
	I1122 00:49:15.970360  659783 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:49:15.970435  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:49:16.006441  659783 cri.go:89] found id: ""
	I1122 00:49:16.006470  659783 logs.go:282] 0 containers: []
	W1122 00:49:16.006480  659783 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:49:16.006488  659783 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:49:16.006555  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:49:16.035453  659783 cri.go:89] found id: "492c7cccdf0364fe71128220d1afdd81ad6e587b857b7898958d72c0a7eb4953"
	I1122 00:49:16.035476  659783 cri.go:89] found id: "bb0e61234ebc57d4a32db1e400b0a3cab257c463daf8c1fd63a71be5978bb7ad"
	I1122 00:49:16.035481  659783 cri.go:89] found id: ""
	I1122 00:49:16.035490  659783 logs.go:282] 2 containers: [492c7cccdf0364fe71128220d1afdd81ad6e587b857b7898958d72c0a7eb4953 bb0e61234ebc57d4a32db1e400b0a3cab257c463daf8c1fd63a71be5978bb7ad]
	I1122 00:49:16.035549  659783 ssh_runner.go:195] Run: which crictl
	I1122 00:49:16.039706  659783 ssh_runner.go:195] Run: which crictl
	I1122 00:49:16.043883  659783 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1122 00:49:16.043962  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:49:16.076539  659783 cri.go:89] found id: ""
	I1122 00:49:16.076566  659783 logs.go:282] 0 containers: []
	W1122 00:49:16.076575  659783 logs.go:284] No container was found matching "kindnet"
	I1122 00:49:16.076582  659783 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:49:16.076644  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:49:16.104616  659783 cri.go:89] found id: ""
	I1122 00:49:16.104642  659783 logs.go:282] 0 containers: []
	W1122 00:49:16.104651  659783 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:49:16.104697  659783 logs.go:123] Gathering logs for kube-controller-manager [492c7cccdf0364fe71128220d1afdd81ad6e587b857b7898958d72c0a7eb4953] ...
	I1122 00:49:16.104717  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 492c7cccdf0364fe71128220d1afdd81ad6e587b857b7898958d72c0a7eb4953"
	I1122 00:49:16.132023  659783 logs.go:123] Gathering logs for kube-controller-manager [bb0e61234ebc57d4a32db1e400b0a3cab257c463daf8c1fd63a71be5978bb7ad] ...
	I1122 00:49:16.132050  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb0e61234ebc57d4a32db1e400b0a3cab257c463daf8c1fd63a71be5978bb7ad"
	I1122 00:49:16.157979  659783 logs.go:123] Gathering logs for CRI-O ...
	I1122 00:49:16.158005  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1122 00:49:16.215931  659783 logs.go:123] Gathering logs for container status ...
	I1122 00:49:16.215968  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:49:16.246098  659783 logs.go:123] Gathering logs for dmesg ...
	I1122 00:49:16.246128  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:49:12.991600  675703 out.go:252] * Updating the running docker "pause-028559" container ...
	I1122 00:49:12.991639  675703 machine.go:94] provisionDockerMachine start ...
	I1122 00:49:12.991725  675703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-028559
	I1122 00:49:13.010871  675703 main.go:143] libmachine: Using SSH client type: native
	I1122 00:49:13.011204  675703 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33745 <nil> <nil>}
	I1122 00:49:13.011221  675703 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:49:13.149372  675703 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-028559
	
	I1122 00:49:13.149398  675703 ubuntu.go:182] provisioning hostname "pause-028559"
	I1122 00:49:13.149460  675703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-028559
	I1122 00:49:13.167795  675703 main.go:143] libmachine: Using SSH client type: native
	I1122 00:49:13.168144  675703 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33745 <nil> <nil>}
	I1122 00:49:13.168165  675703 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-028559 && echo "pause-028559" | sudo tee /etc/hostname
	I1122 00:49:13.328740  675703 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-028559
	
	I1122 00:49:13.328814  675703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-028559
	I1122 00:49:13.348051  675703 main.go:143] libmachine: Using SSH client type: native
	I1122 00:49:13.348487  675703 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33745 <nil> <nil>}
	I1122 00:49:13.348505  675703 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-028559' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-028559/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-028559' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:49:13.494473  675703 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1122 00:49:13.494501  675703 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-513600/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-513600/.minikube}
	I1122 00:49:13.494521  675703 ubuntu.go:190] setting up certificates
	I1122 00:49:13.494530  675703 provision.go:84] configureAuth start
	I1122 00:49:13.494587  675703 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-028559
	I1122 00:49:13.512933  675703 provision.go:143] copyHostCerts
	I1122 00:49:13.513003  675703 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem, removing ...
	I1122 00:49:13.513023  675703 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem
	I1122 00:49:13.513095  675703 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem (1675 bytes)
	I1122 00:49:13.513193  675703 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem, removing ...
	I1122 00:49:13.513204  675703 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem
	I1122 00:49:13.513232  675703 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem (1078 bytes)
	I1122 00:49:13.513286  675703 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem, removing ...
	I1122 00:49:13.513296  675703 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem
	I1122 00:49:13.513320  675703 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem (1123 bytes)
	I1122 00:49:13.513366  675703 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem org=jenkins.pause-028559 san=[127.0.0.1 192.168.85.2 localhost minikube pause-028559]
	I1122 00:49:14.027133  675703 provision.go:177] copyRemoteCerts
	I1122 00:49:14.027237  675703 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:49:14.027321  675703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-028559
	I1122 00:49:14.045612  675703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33745 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/pause-028559/id_rsa Username:docker}
	I1122 00:49:14.149496  675703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:49:14.167820  675703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1122 00:49:14.185598  675703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1122 00:49:14.205471  675703 provision.go:87] duration metric: took 710.919232ms to configureAuth
	I1122 00:49:14.205500  675703 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:49:14.205724  675703 config.go:182] Loaded profile config "pause-028559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:49:14.205855  675703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-028559
	I1122 00:49:14.222966  675703 main.go:143] libmachine: Using SSH client type: native
	I1122 00:49:14.223286  675703 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33745 <nil> <nil>}
	I1122 00:49:14.223308  675703 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1122 00:49:16.265015  659783 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:49:16.265044  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1122 00:49:20.687879  659783 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (4.422814344s)
	W1122 00:49:20.687910  659783 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:35182->[::1]:8443: read: connection reset by peer
	 output: 
	** stderr ** 
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:35182->[::1]:8443: read: connection reset by peer
	
	** /stderr **
	I1122 00:49:20.687918  659783 logs.go:123] Gathering logs for kube-scheduler [c546164b4e284af119113a13e0a67e01c0dd3fe9b208a4092c79664dd912c8e8] ...
	I1122 00:49:20.687929  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c546164b4e284af119113a13e0a67e01c0dd3fe9b208a4092c79664dd912c8e8"
	I1122 00:49:20.768382  659783 logs.go:123] Gathering logs for kubelet ...
	I1122 00:49:20.768465  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:49:20.900070  659783 logs.go:123] Gathering logs for kube-apiserver [513641c31371d8c05f0f09cedcd3f1444e01013c0f90e22c86ddeadf4d1a516c] ...
	I1122 00:49:20.900104  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 513641c31371d8c05f0f09cedcd3f1444e01013c0f90e22c86ddeadf4d1a516c"
	I1122 00:49:20.947744  659783 logs.go:123] Gathering logs for kube-apiserver [b12168475560a7e49fcbb662aed1e0002ea914f70e776f082083dc3a3c822135] ...
	I1122 00:49:20.947780  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b12168475560a7e49fcbb662aed1e0002ea914f70e776f082083dc3a3c822135"
	W1122 00:49:20.978746  659783 logs.go:130] failed kube-apiserver [b12168475560a7e49fcbb662aed1e0002ea914f70e776f082083dc3a3c822135]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b12168475560a7e49fcbb662aed1e0002ea914f70e776f082083dc3a3c822135" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b12168475560a7e49fcbb662aed1e0002ea914f70e776f082083dc3a3c822135": Process exited with status 1
	stdout:
	
	stderr:
	E1122 00:49:20.976033    3907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b12168475560a7e49fcbb662aed1e0002ea914f70e776f082083dc3a3c822135\": container with ID starting with b12168475560a7e49fcbb662aed1e0002ea914f70e776f082083dc3a3c822135 not found: ID does not exist" containerID="b12168475560a7e49fcbb662aed1e0002ea914f70e776f082083dc3a3c822135"
	time="2025-11-22T00:49:20Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"b12168475560a7e49fcbb662aed1e0002ea914f70e776f082083dc3a3c822135\": container with ID starting with b12168475560a7e49fcbb662aed1e0002ea914f70e776f082083dc3a3c822135 not found: ID does not exist"
	 output: 
	** stderr ** 
	E1122 00:49:20.976033    3907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b12168475560a7e49fcbb662aed1e0002ea914f70e776f082083dc3a3c822135\": container with ID starting with b12168475560a7e49fcbb662aed1e0002ea914f70e776f082083dc3a3c822135 not found: ID does not exist" containerID="b12168475560a7e49fcbb662aed1e0002ea914f70e776f082083dc3a3c822135"
	time="2025-11-22T00:49:20Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"b12168475560a7e49fcbb662aed1e0002ea914f70e776f082083dc3a3c822135\": container with ID starting with b12168475560a7e49fcbb662aed1e0002ea914f70e776f082083dc3a3c822135 not found: ID does not exist"
	
	** /stderr **
	I1122 00:49:19.583615  675703 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1122 00:49:19.583639  675703 machine.go:97] duration metric: took 6.591990734s to provisionDockerMachine
	I1122 00:49:19.583651  675703 start.go:293] postStartSetup for "pause-028559" (driver="docker")
	I1122 00:49:19.583662  675703 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:49:19.583720  675703 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:49:19.583760  675703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-028559
	I1122 00:49:19.602527  675703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33745 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/pause-028559/id_rsa Username:docker}
	I1122 00:49:19.705588  675703 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:49:19.708796  675703 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:49:19.708822  675703 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:49:19.708850  675703 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/addons for local assets ...
	I1122 00:49:19.708911  675703 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/files for local assets ...
	I1122 00:49:19.709035  675703 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem -> 5169372.pem in /etc/ssl/certs
	I1122 00:49:19.709137  675703 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:49:19.716285  675703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /etc/ssl/certs/5169372.pem (1708 bytes)
	I1122 00:49:19.733029  675703 start.go:296] duration metric: took 149.362549ms for postStartSetup
	I1122 00:49:19.733114  675703 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:49:19.733173  675703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-028559
	I1122 00:49:19.750385  675703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33745 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/pause-028559/id_rsa Username:docker}
	I1122 00:49:19.847033  675703 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:49:19.851815  675703 fix.go:56] duration metric: took 6.882488052s for fixHost
	I1122 00:49:19.851840  675703 start.go:83] releasing machines lock for "pause-028559", held for 6.882537069s
	I1122 00:49:19.851920  675703 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-028559
	I1122 00:49:19.868078  675703 ssh_runner.go:195] Run: cat /version.json
	I1122 00:49:19.868140  675703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-028559
	I1122 00:49:19.868151  675703 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:49:19.868217  675703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-028559
	I1122 00:49:19.886355  675703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33745 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/pause-028559/id_rsa Username:docker}
	I1122 00:49:19.887694  675703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33745 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/pause-028559/id_rsa Username:docker}
	I1122 00:49:19.985564  675703 ssh_runner.go:195] Run: systemctl --version
	I1122 00:49:20.079432  675703 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1122 00:49:20.122304  675703 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:49:20.126849  675703 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:49:20.126918  675703 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:49:20.135486  675703 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1122 00:49:20.135510  675703 start.go:496] detecting cgroup driver to use...
	I1122 00:49:20.135541  675703 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1122 00:49:20.135602  675703 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1122 00:49:20.150982  675703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1122 00:49:20.164642  675703 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:49:20.164707  675703 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:49:20.181760  675703 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:49:20.195579  675703 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:49:20.337744  675703 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:49:20.485361  675703 docker.go:234] disabling docker service ...
	I1122 00:49:20.485423  675703 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:49:20.502268  675703 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:49:20.515993  675703 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:49:20.658658  675703 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:49:20.858345  675703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:49:20.874536  675703 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:49:20.899875  675703 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1122 00:49:20.899946  675703 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:49:20.909845  675703 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1122 00:49:20.909912  675703 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:49:20.923518  675703 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:49:20.933988  675703 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:49:20.949689  675703 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:49:20.959059  675703 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:49:20.972008  675703 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:49:20.981590  675703 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:49:20.990490  675703 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:49:20.999363  675703 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:49:21.009130  675703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:49:21.157828  675703 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1122 00:49:21.366060  675703 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1122 00:49:21.366129  675703 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1122 00:49:21.370129  675703 start.go:564] Will wait 60s for crictl version
	I1122 00:49:21.370191  675703 ssh_runner.go:195] Run: which crictl
	I1122 00:49:21.373639  675703 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:49:21.398612  675703 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1122 00:49:21.398697  675703 ssh_runner.go:195] Run: crio --version
	I1122 00:49:21.425256  675703 ssh_runner.go:195] Run: crio --version
	I1122 00:49:21.455792  675703 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1122 00:49:21.458723  675703 cli_runner.go:164] Run: docker network inspect pause-028559 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:49:21.474025  675703 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1122 00:49:21.477984  675703 kubeadm.go:884] updating cluster {Name:pause-028559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-028559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1122 00:49:21.478137  675703 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:49:21.478191  675703 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:49:21.510004  675703 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:49:21.510031  675703 crio.go:433] Images already preloaded, skipping extraction
	I1122 00:49:21.510095  675703 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:49:21.535600  675703 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:49:21.535620  675703 cache_images.go:86] Images are preloaded, skipping loading
	I1122 00:49:21.535628  675703 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1122 00:49:21.535731  675703 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-028559 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-028559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1122 00:49:21.535809  675703 ssh_runner.go:195] Run: crio config
	I1122 00:49:21.587638  675703 cni.go:84] Creating CNI manager for ""
	I1122 00:49:21.587661  675703 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:49:21.587679  675703 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1122 00:49:21.587726  675703 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-028559 NodeName:pause-028559 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 00:49:21.587911  675703 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-028559"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1122 00:49:21.587999  675703 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 00:49:21.595942  675703 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:49:21.596046  675703 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1122 00:49:21.603592  675703 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1122 00:49:21.617454  675703 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:49:21.629618  675703 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1122 00:49:21.644361  675703 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1122 00:49:21.648034  675703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:49:21.792492  675703 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:49:21.807347  675703 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/pause-028559 for IP: 192.168.85.2
	I1122 00:49:21.807428  675703 certs.go:195] generating shared ca certs ...
	I1122 00:49:21.807460  675703 certs.go:227] acquiring lock for ca certs: {Name:mkaf4c79493334cb2058b349c36be7473837f9b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:49:21.807632  675703 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key
	I1122 00:49:21.807707  675703 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key
	I1122 00:49:21.807744  675703 certs.go:257] generating profile certs ...
	I1122 00:49:21.807882  675703 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/pause-028559/client.key
	I1122 00:49:21.807986  675703 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/pause-028559/apiserver.key.d413a2b7
	I1122 00:49:21.808061  675703 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/pause-028559/proxy-client.key
	I1122 00:49:21.808205  675703 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem (1338 bytes)
	W1122 00:49:21.808291  675703 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937_empty.pem, impossibly tiny 0 bytes
	I1122 00:49:21.808320  675703 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:49:21.808367  675703 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:49:21.808420  675703 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:49:21.808472  675703 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem (1675 bytes)
	I1122 00:49:21.808563  675703 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem (1708 bytes)
	I1122 00:49:21.809188  675703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:49:21.828740  675703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1122 00:49:21.846083  675703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:49:21.863576  675703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:49:21.880831  675703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/pause-028559/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1122 00:49:21.897968  675703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/pause-028559/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1122 00:49:21.916136  675703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/pause-028559/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:49:21.933505  675703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/pause-028559/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1122 00:49:21.950594  675703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /usr/share/ca-certificates/5169372.pem (1708 bytes)
	I1122 00:49:21.967967  675703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:49:21.985075  675703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem --> /usr/share/ca-certificates/516937.pem (1338 bytes)
	I1122 00:49:22.009505  675703 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 00:49:22.023923  675703 ssh_runner.go:195] Run: openssl version
	I1122 00:49:22.030361  675703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5169372.pem && ln -fs /usr/share/ca-certificates/5169372.pem /etc/ssl/certs/5169372.pem"
	I1122 00:49:22.039238  675703 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5169372.pem
	I1122 00:49:22.043542  675703 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:55 /usr/share/ca-certificates/5169372.pem
	I1122 00:49:22.043610  675703 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5169372.pem
	I1122 00:49:22.089660  675703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5169372.pem /etc/ssl/certs/3ec20f2e.0"
	I1122 00:49:22.097984  675703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:49:22.106565  675703 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:49:22.110214  675703 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:49:22.110303  675703 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:49:22.152326  675703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1122 00:49:22.160182  675703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516937.pem && ln -fs /usr/share/ca-certificates/516937.pem /etc/ssl/certs/516937.pem"
	I1122 00:49:22.168398  675703 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516937.pem
	I1122 00:49:22.172129  675703 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:55 /usr/share/ca-certificates/516937.pem
	I1122 00:49:22.172242  675703 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516937.pem
	I1122 00:49:22.213186  675703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516937.pem /etc/ssl/certs/51391683.0"
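The openssl/ln pairs above install each CA under its OpenSSL subject-hash name (for example /etc/ssl/certs/b5213941.0 for minikubeCA.pem) so that system TLS verification can locate it. A minimal sketch of the same two steps driven from Go via os/exec; the helper name and error handling are illustrative, not minikube's implementation:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA links certPath into certsDir under its OpenSSL subject-hash name,
// mirroring the `openssl x509 -hash -noout` plus `ln -fs` commands in the log.
func installCA(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace any stale link, like ln -fs
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}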
	I1122 00:49:22.221286  675703 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:49:22.225048  675703 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1122 00:49:22.266039  675703 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1122 00:49:22.307350  675703 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1122 00:49:22.348238  675703 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1122 00:49:22.389177  675703 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1122 00:49:22.430093  675703 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
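Each `openssl x509 -checkend 86400` run above asks whether the certificate will still be valid 24 hours from now; a non-zero exit would force regeneration before restarting the control plane. The same check can be done in-process with crypto/x509, as in this illustrative sketch:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// matching the intent of `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}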
	I1122 00:49:22.471760  675703 kubeadm.go:401] StartCluster: {Name:pause-028559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-028559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:49:22.471883  675703 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1122 00:49:22.471950  675703 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:49:22.499207  675703 cri.go:89] found id: "e8a18ac29ae5ac05f6b5b4c70bcf6b1fc73a710e59136307cfa68c8bcd36557d"
	I1122 00:49:22.499229  675703 cri.go:89] found id: "7cf54964fa6209d21f425b45edbf33a457dcdc58ce72370d8035cad09e292b10"
	I1122 00:49:22.499233  675703 cri.go:89] found id: "8a47d7c0ee952b2d53a1e55f636df9b8ea9e35a1de95e8cd16ba1ee91d2429e5"
	I1122 00:49:22.499237  675703 cri.go:89] found id: "36494fae0c15c7ac23088851e0409e2f96cb7f3066877902ebe7aedf80916b67"
	I1122 00:49:22.499240  675703 cri.go:89] found id: "b36b71426eeddbbd8a66fee6ba6d51873fa9668612addcc8f00a16c6fdb775fd"
	I1122 00:49:22.499243  675703 cri.go:89] found id: "fe27dafacf48d07af4ed5cb9690723267dc16f0e9bc5356896a5e1d595009ff0"
	I1122 00:49:22.499247  675703 cri.go:89] found id: "c1bb2c7a299bbbaca1218d7345b106ea8559d1df49b48d1e8effc89a6e7a38b3"
	I1122 00:49:22.499250  675703 cri.go:89] found id: ""
	I1122 00:49:22.499328  675703 ssh_runner.go:195] Run: sudo runc list -f json
	W1122 00:49:22.510348  675703 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:49:22Z" level=error msg="open /run/runc: no such file or directory"
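The `runc list -f json` probe fails on this node because the /run/runc state directory is absent, so minikube records the "unpause failed: list paused" warning and falls through to a cluster restart. For reference only, a hedged sketch of consuming that command's JSON output when it does succeed; field access is limited to the commonly documented id and status keys, and this is not minikube's code:

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// `runc list -f json` prints a JSON array of container state objects.
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		// On this node the call fails exactly as in the log above.
		fmt.Fprintln(os.Stderr, "runc list failed:", err)
		os.Exit(1)
	}
	var containers []map[string]interface{}
	if err := json.Unmarshal(out, &containers); err != nil {
		fmt.Fprintln(os.Stderr, "parse:", err)
		os.Exit(1)
	}
	for _, c := range containers {
		fmt.Printf("id=%v status=%v\n", c["id"], c["status"])
	}
}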
	I1122 00:49:22.510452  675703 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 00:49:22.518456  675703 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1122 00:49:22.518476  675703 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1122 00:49:22.518527  675703 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1122 00:49:22.525863  675703 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1122 00:49:22.526504  675703 kubeconfig.go:125] found "pause-028559" server: "https://192.168.85.2:8443"
	I1122 00:49:22.527325  675703 kapi.go:59] client config for pause-028559: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21934-513600/.minikube/profiles/pause-028559/client.crt", KeyFile:"/home/jenkins/minikube-integration/21934-513600/.minikube/profiles/pause-028559/client.key", CAFile:"/home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
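The rest.Config dumped above authenticates with the profile's client certificate and key and trusts the shared minikube CA. A minimal client-go sketch that builds an equivalent clientset from the profile's kubeconfig; the path comes from the log, the code itself is only illustrative:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as updated by this run; adjust for another environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21934-513600/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		fmt.Println(n.Name)
	}
}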
	I1122 00:49:22.527861  675703 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1122 00:49:22.527879  675703 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1122 00:49:22.527886  675703 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1122 00:49:22.527892  675703 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1122 00:49:22.527899  675703 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1122 00:49:22.528166  675703 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1122 00:49:22.535965  675703 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1122 00:49:22.535999  675703 kubeadm.go:602] duration metric: took 17.516524ms to restartPrimaryControlPlane
	I1122 00:49:22.536009  675703 kubeadm.go:403] duration metric: took 64.261728ms to StartCluster
	I1122 00:49:22.536024  675703 settings.go:142] acquiring lock: {Name:mk6c31eb57ec65b047b78b4e1046e03fe7cc77bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:49:22.536082  675703 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1122 00:49:22.536975  675703 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/kubeconfig: {Name:mkcd094d8a765e5a81369c0fb33cb5e696b17bb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:49:22.537195  675703 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:49:22.537586  675703 config.go:182] Loaded profile config "pause-028559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:49:22.537637  675703 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1122 00:49:22.542171  675703 out.go:179] * Verifying Kubernetes components...
	I1122 00:49:22.544048  675703 out.go:179] * Enabled addons: 
	I1122 00:49:22.546908  675703 addons.go:530] duration metric: took 9.271441ms for enable addons: enabled=[]
	I1122 00:49:22.546954  675703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:49:22.701557  675703 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:49:22.714884  675703 node_ready.go:35] waiting up to 6m0s for node "pause-028559" to be "Ready" ...
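node_ready.go now polls the node object until its Ready condition turns True; the wait resolves about seven seconds later, at 00:49:29 further down. A compact sketch of that kind of wait with client-go, where the package name, helper name, and polling interval are assumptions:

package nodewait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitForNodeReady polls until the named node reports Ready=True, roughly what the
// "waiting up to 6m0s for node ... to be Ready" line above is doing. Interval and
// timeout are illustrative; minikube's node_ready.go has its own backoff and logging.
func WaitForNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient API errors as "not ready yet"
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}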
	I1122 00:49:23.479039  659783 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:49:23.479428  659783 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1122 00:49:23.479469  659783 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:49:23.479520  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:49:23.517661  659783 cri.go:89] found id: "513641c31371d8c05f0f09cedcd3f1444e01013c0f90e22c86ddeadf4d1a516c"
	I1122 00:49:23.517680  659783 cri.go:89] found id: ""
	I1122 00:49:23.517687  659783 logs.go:282] 1 containers: [513641c31371d8c05f0f09cedcd3f1444e01013c0f90e22c86ddeadf4d1a516c]
	I1122 00:49:23.517741  659783 ssh_runner.go:195] Run: which crictl
	I1122 00:49:23.527344  659783 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1122 00:49:23.527428  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:49:23.580193  659783 cri.go:89] found id: ""
	I1122 00:49:23.580214  659783 logs.go:282] 0 containers: []
	W1122 00:49:23.580222  659783 logs.go:284] No container was found matching "etcd"
	I1122 00:49:23.580228  659783 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1122 00:49:23.580285  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:49:23.661303  659783 cri.go:89] found id: ""
	I1122 00:49:23.661324  659783 logs.go:282] 0 containers: []
	W1122 00:49:23.661332  659783 logs.go:284] No container was found matching "coredns"
	I1122 00:49:23.661339  659783 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:49:23.661395  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:49:23.709061  659783 cri.go:89] found id: "c546164b4e284af119113a13e0a67e01c0dd3fe9b208a4092c79664dd912c8e8"
	I1122 00:49:23.709079  659783 cri.go:89] found id: ""
	I1122 00:49:23.709086  659783 logs.go:282] 1 containers: [c546164b4e284af119113a13e0a67e01c0dd3fe9b208a4092c79664dd912c8e8]
	I1122 00:49:23.709146  659783 ssh_runner.go:195] Run: which crictl
	I1122 00:49:23.717708  659783 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:49:23.717776  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:49:23.760550  659783 cri.go:89] found id: ""
	I1122 00:49:23.760571  659783 logs.go:282] 0 containers: []
	W1122 00:49:23.760579  659783 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:49:23.760586  659783 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:49:23.760643  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:49:23.808446  659783 cri.go:89] found id: "492c7cccdf0364fe71128220d1afdd81ad6e587b857b7898958d72c0a7eb4953"
	I1122 00:49:23.808519  659783 cri.go:89] found id: "bb0e61234ebc57d4a32db1e400b0a3cab257c463daf8c1fd63a71be5978bb7ad"
	I1122 00:49:23.808539  659783 cri.go:89] found id: ""
	I1122 00:49:23.808563  659783 logs.go:282] 2 containers: [492c7cccdf0364fe71128220d1afdd81ad6e587b857b7898958d72c0a7eb4953 bb0e61234ebc57d4a32db1e400b0a3cab257c463daf8c1fd63a71be5978bb7ad]
	I1122 00:49:23.808649  659783 ssh_runner.go:195] Run: which crictl
	I1122 00:49:23.812612  659783 ssh_runner.go:195] Run: which crictl
	I1122 00:49:23.822331  659783 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1122 00:49:23.822451  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:49:23.874912  659783 cri.go:89] found id: ""
	I1122 00:49:23.874986  659783 logs.go:282] 0 containers: []
	W1122 00:49:23.875008  659783 logs.go:284] No container was found matching "kindnet"
	I1122 00:49:23.875026  659783 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:49:23.875115  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:49:23.922229  659783 cri.go:89] found id: ""
	I1122 00:49:23.922303  659783 logs.go:282] 0 containers: []
	W1122 00:49:23.922327  659783 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:49:23.922370  659783 logs.go:123] Gathering logs for kubelet ...
	I1122 00:49:23.922399  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:49:24.095003  659783 logs.go:123] Gathering logs for dmesg ...
	I1122 00:49:24.095043  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:49:24.114309  659783 logs.go:123] Gathering logs for kube-controller-manager [492c7cccdf0364fe71128220d1afdd81ad6e587b857b7898958d72c0a7eb4953] ...
	I1122 00:49:24.114342  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 492c7cccdf0364fe71128220d1afdd81ad6e587b857b7898958d72c0a7eb4953"
	I1122 00:49:24.151737  659783 logs.go:123] Gathering logs for CRI-O ...
	I1122 00:49:24.151764  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1122 00:49:24.256697  659783 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:49:24.256775  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:49:24.387958  659783 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:49:24.387976  659783 logs.go:123] Gathering logs for kube-apiserver [513641c31371d8c05f0f09cedcd3f1444e01013c0f90e22c86ddeadf4d1a516c] ...
	I1122 00:49:24.387991  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 513641c31371d8c05f0f09cedcd3f1444e01013c0f90e22c86ddeadf4d1a516c"
	I1122 00:49:24.427807  659783 logs.go:123] Gathering logs for kube-scheduler [c546164b4e284af119113a13e0a67e01c0dd3fe9b208a4092c79664dd912c8e8] ...
	I1122 00:49:24.427883  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c546164b4e284af119113a13e0a67e01c0dd3fe9b208a4092c79664dd912c8e8"
	I1122 00:49:24.509719  659783 logs.go:123] Gathering logs for kube-controller-manager [bb0e61234ebc57d4a32db1e400b0a3cab257c463daf8c1fd63a71be5978bb7ad] ...
	I1122 00:49:24.509878  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb0e61234ebc57d4a32db1e400b0a3cab257c463daf8c1fd63a71be5978bb7ad"
	I1122 00:49:24.589091  659783 logs.go:123] Gathering logs for container status ...
	I1122 00:49:24.589117  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
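Each retry round in this goroutine follows the same shape: probe /healthz, enumerate control-plane containers per component with `crictl ps -a --quiet --name=<component>`, then tail kubelet, dmesg, CRI-O, and per-container logs. A small sketch of that enumeration step as a hypothetical standalone helper, not minikube's cri package:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Mirrors the repeated `sudo crictl ps -a --quiet --name=<component>` calls in the log:
	// --quiet prints one container ID per line and nothing at all when no container matches.
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%s: %d container(s) %v\n", name, len(ids), ids)
	}
}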
	I1122 00:49:27.174752  659783 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:49:27.175242  659783 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1122 00:49:27.175327  659783 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:49:27.175444  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:49:27.217336  659783 cri.go:89] found id: "513641c31371d8c05f0f09cedcd3f1444e01013c0f90e22c86ddeadf4d1a516c"
	I1122 00:49:27.217395  659783 cri.go:89] found id: ""
	I1122 00:49:27.217426  659783 logs.go:282] 1 containers: [513641c31371d8c05f0f09cedcd3f1444e01013c0f90e22c86ddeadf4d1a516c]
	I1122 00:49:27.217510  659783 ssh_runner.go:195] Run: which crictl
	I1122 00:49:27.230837  659783 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1122 00:49:27.230962  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:49:27.279157  659783 cri.go:89] found id: ""
	I1122 00:49:27.279231  659783 logs.go:282] 0 containers: []
	W1122 00:49:27.279253  659783 logs.go:284] No container was found matching "etcd"
	I1122 00:49:27.279273  659783 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1122 00:49:27.279379  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:49:27.344590  659783 cri.go:89] found id: ""
	I1122 00:49:27.344664  659783 logs.go:282] 0 containers: []
	W1122 00:49:27.344686  659783 logs.go:284] No container was found matching "coredns"
	I1122 00:49:27.344705  659783 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:49:27.344792  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:49:27.383127  659783 cri.go:89] found id: "c546164b4e284af119113a13e0a67e01c0dd3fe9b208a4092c79664dd912c8e8"
	I1122 00:49:27.383199  659783 cri.go:89] found id: ""
	I1122 00:49:27.383222  659783 logs.go:282] 1 containers: [c546164b4e284af119113a13e0a67e01c0dd3fe9b208a4092c79664dd912c8e8]
	I1122 00:49:27.383314  659783 ssh_runner.go:195] Run: which crictl
	I1122 00:49:27.389051  659783 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:49:27.389184  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:49:27.436305  659783 cri.go:89] found id: ""
	I1122 00:49:27.436384  659783 logs.go:282] 0 containers: []
	W1122 00:49:27.436408  659783 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:49:27.436426  659783 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:49:27.436531  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:49:27.508007  659783 cri.go:89] found id: "492c7cccdf0364fe71128220d1afdd81ad6e587b857b7898958d72c0a7eb4953"
	I1122 00:49:27.508080  659783 cri.go:89] found id: "bb0e61234ebc57d4a32db1e400b0a3cab257c463daf8c1fd63a71be5978bb7ad"
	I1122 00:49:27.508116  659783 cri.go:89] found id: ""
	I1122 00:49:27.508140  659783 logs.go:282] 2 containers: [492c7cccdf0364fe71128220d1afdd81ad6e587b857b7898958d72c0a7eb4953 bb0e61234ebc57d4a32db1e400b0a3cab257c463daf8c1fd63a71be5978bb7ad]
	I1122 00:49:27.508228  659783 ssh_runner.go:195] Run: which crictl
	I1122 00:49:27.512426  659783 ssh_runner.go:195] Run: which crictl
	I1122 00:49:27.522456  659783 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1122 00:49:27.522588  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:49:27.563609  659783 cri.go:89] found id: ""
	I1122 00:49:27.563692  659783 logs.go:282] 0 containers: []
	W1122 00:49:27.563714  659783 logs.go:284] No container was found matching "kindnet"
	I1122 00:49:27.563734  659783 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:49:27.563841  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:49:27.620944  659783 cri.go:89] found id: ""
	I1122 00:49:27.621017  659783 logs.go:282] 0 containers: []
	W1122 00:49:27.621039  659783 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:49:27.621081  659783 logs.go:123] Gathering logs for kubelet ...
	I1122 00:49:27.621108  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:49:27.787270  659783 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:49:27.787301  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:49:27.904766  659783 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:49:27.904785  659783 logs.go:123] Gathering logs for kube-apiserver [513641c31371d8c05f0f09cedcd3f1444e01013c0f90e22c86ddeadf4d1a516c] ...
	I1122 00:49:27.904798  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 513641c31371d8c05f0f09cedcd3f1444e01013c0f90e22c86ddeadf4d1a516c"
	I1122 00:49:27.954024  659783 logs.go:123] Gathering logs for kube-scheduler [c546164b4e284af119113a13e0a67e01c0dd3fe9b208a4092c79664dd912c8e8] ...
	I1122 00:49:27.954108  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c546164b4e284af119113a13e0a67e01c0dd3fe9b208a4092c79664dd912c8e8"
	I1122 00:49:28.057382  659783 logs.go:123] Gathering logs for kube-controller-manager [bb0e61234ebc57d4a32db1e400b0a3cab257c463daf8c1fd63a71be5978bb7ad] ...
	I1122 00:49:28.057480  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb0e61234ebc57d4a32db1e400b0a3cab257c463daf8c1fd63a71be5978bb7ad"
	I1122 00:49:28.111713  659783 logs.go:123] Gathering logs for CRI-O ...
	I1122 00:49:28.111737  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1122 00:49:28.180238  659783 logs.go:123] Gathering logs for container status ...
	I1122 00:49:28.180314  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:49:28.228480  659783 logs.go:123] Gathering logs for dmesg ...
	I1122 00:49:28.228504  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:49:28.248652  659783 logs.go:123] Gathering logs for kube-controller-manager [492c7cccdf0364fe71128220d1afdd81ad6e587b857b7898958d72c0a7eb4953] ...
	I1122 00:49:28.248722  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 492c7cccdf0364fe71128220d1afdd81ad6e587b857b7898958d72c0a7eb4953"
	I1122 00:49:30.796894  659783 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:49:30.797406  659783 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1122 00:49:30.797458  659783 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:49:30.797520  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:49:30.824603  659783 cri.go:89] found id: "513641c31371d8c05f0f09cedcd3f1444e01013c0f90e22c86ddeadf4d1a516c"
	I1122 00:49:30.824626  659783 cri.go:89] found id: ""
	I1122 00:49:30.824633  659783 logs.go:282] 1 containers: [513641c31371d8c05f0f09cedcd3f1444e01013c0f90e22c86ddeadf4d1a516c]
	I1122 00:49:30.824697  659783 ssh_runner.go:195] Run: which crictl
	I1122 00:49:30.828378  659783 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1122 00:49:30.828448  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:49:30.860932  659783 cri.go:89] found id: ""
	I1122 00:49:30.860956  659783 logs.go:282] 0 containers: []
	W1122 00:49:30.860965  659783 logs.go:284] No container was found matching "etcd"
	I1122 00:49:30.860971  659783 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1122 00:49:30.861028  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:49:30.889540  659783 cri.go:89] found id: ""
	I1122 00:49:30.889562  659783 logs.go:282] 0 containers: []
	W1122 00:49:30.889572  659783 logs.go:284] No container was found matching "coredns"
	I1122 00:49:30.889578  659783 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:49:30.889638  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:49:30.920066  659783 cri.go:89] found id: "c546164b4e284af119113a13e0a67e01c0dd3fe9b208a4092c79664dd912c8e8"
	I1122 00:49:30.920095  659783 cri.go:89] found id: ""
	I1122 00:49:30.920104  659783 logs.go:282] 1 containers: [c546164b4e284af119113a13e0a67e01c0dd3fe9b208a4092c79664dd912c8e8]
	I1122 00:49:30.920162  659783 ssh_runner.go:195] Run: which crictl
	I1122 00:49:30.924202  659783 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:49:30.924290  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:49:30.952573  659783 cri.go:89] found id: ""
	I1122 00:49:30.952596  659783 logs.go:282] 0 containers: []
	W1122 00:49:30.952605  659783 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:49:30.952611  659783 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:49:30.952669  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:49:30.982018  659783 cri.go:89] found id: "492c7cccdf0364fe71128220d1afdd81ad6e587b857b7898958d72c0a7eb4953"
	I1122 00:49:30.982044  659783 cri.go:89] found id: ""
	I1122 00:49:30.982052  659783 logs.go:282] 1 containers: [492c7cccdf0364fe71128220d1afdd81ad6e587b857b7898958d72c0a7eb4953]
	I1122 00:49:30.982117  659783 ssh_runner.go:195] Run: which crictl
	I1122 00:49:30.986210  659783 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1122 00:49:30.986362  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:49:31.020631  659783 cri.go:89] found id: ""
	I1122 00:49:31.020715  659783 logs.go:282] 0 containers: []
	W1122 00:49:31.020741  659783 logs.go:284] No container was found matching "kindnet"
	I1122 00:49:31.020760  659783 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:49:31.020871  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:49:31.048252  659783 cri.go:89] found id: ""
	I1122 00:49:31.048278  659783 logs.go:282] 0 containers: []
	W1122 00:49:31.048287  659783 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:49:31.048297  659783 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:49:31.048341  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:49:31.117367  659783 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:49:31.117393  659783 logs.go:123] Gathering logs for kube-apiserver [513641c31371d8c05f0f09cedcd3f1444e01013c0f90e22c86ddeadf4d1a516c] ...
	I1122 00:49:31.117411  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 513641c31371d8c05f0f09cedcd3f1444e01013c0f90e22c86ddeadf4d1a516c"
	I1122 00:49:31.153446  659783 logs.go:123] Gathering logs for kube-scheduler [c546164b4e284af119113a13e0a67e01c0dd3fe9b208a4092c79664dd912c8e8] ...
	I1122 00:49:31.153483  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c546164b4e284af119113a13e0a67e01c0dd3fe9b208a4092c79664dd912c8e8"
	I1122 00:49:31.242082  659783 logs.go:123] Gathering logs for kube-controller-manager [492c7cccdf0364fe71128220d1afdd81ad6e587b857b7898958d72c0a7eb4953] ...
	I1122 00:49:31.242159  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 492c7cccdf0364fe71128220d1afdd81ad6e587b857b7898958d72c0a7eb4953"
	I1122 00:49:29.222497  675703 node_ready.go:49] node "pause-028559" is "Ready"
	I1122 00:49:29.222523  675703 node_ready.go:38] duration metric: took 6.507599327s for node "pause-028559" to be "Ready" ...
	I1122 00:49:29.222537  675703 api_server.go:52] waiting for apiserver process to appear ...
	I1122 00:49:29.222596  675703 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:49:29.234532  675703 api_server.go:72] duration metric: took 6.697297012s to wait for apiserver process to appear ...
	I1122 00:49:29.234554  675703 api_server.go:88] waiting for apiserver healthz status ...
	I1122 00:49:29.234573  675703 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1122 00:49:29.281690  675703 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1122 00:49:29.281774  675703 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1122 00:49:29.735012  675703 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1122 00:49:29.743237  675703 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1122 00:49:29.743327  675703 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1122 00:49:30.234922  675703 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1122 00:49:30.243000  675703 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1122 00:49:30.244073  675703 api_server.go:141] control plane version: v1.34.1
	I1122 00:49:30.244096  675703 api_server.go:131] duration metric: took 1.00953584s to wait for apiserver health ...
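The 500 responses above enumerate every post-start hook; the check flips to 200 "ok" once rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes, and start-service-ip-repair-controllers finish. A bare-bones probe of the same endpoint, assuming the profile CA path shown in the log and anonymous access to /healthz (a sketch, not minikube's api_server.go):

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
	"time"
)

func main() {
	// CA path as used by this profile in the log above.
	caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
	}
	resp, err := client.Get("https://192.168.85.2:8443/healthz?verbose")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A failing check lists each poststarthook, much like the 500 bodies above; success prints "ok".
	fmt.Printf("status=%d\n%s\n", resp.StatusCode, body)
}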
	I1122 00:49:30.244105  675703 system_pods.go:43] waiting for kube-system pods to appear ...
	I1122 00:49:30.248099  675703 system_pods.go:59] 7 kube-system pods found
	I1122 00:49:30.248135  675703 system_pods.go:61] "coredns-66bc5c9577-mf9wz" [c60bc6ef-6579-4cd2-821a-d54eed09dd2f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:49:30.248144  675703 system_pods.go:61] "etcd-pause-028559" [cde6ab83-039f-4d41-b9d1-9f014e5a0cc2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1122 00:49:30.248157  675703 system_pods.go:61] "kindnet-md6h6" [c69079f9-3127-43a1-99c6-9ec5a41b79cc] Running
	I1122 00:49:30.248163  675703 system_pods.go:61] "kube-apiserver-pause-028559" [e01b3f00-761a-4a4a-883d-0f10a9dcee53] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1122 00:49:30.248171  675703 system_pods.go:61] "kube-controller-manager-pause-028559" [facefbfd-f19e-48a4-9b4a-bc60f64a69bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1122 00:49:30.248175  675703 system_pods.go:61] "kube-proxy-qnj6x" [1e6d9e0d-242d-484d-be05-aaaf175e8c31] Running
	I1122 00:49:30.248202  675703 system_pods.go:61] "kube-scheduler-pause-028559" [f94e531d-a5fe-4de0-9d3d-4779af25bf97] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1122 00:49:30.248212  675703 system_pods.go:74] duration metric: took 4.101007ms to wait for pod list to return data ...
	I1122 00:49:30.248223  675703 default_sa.go:34] waiting for default service account to be created ...
	I1122 00:49:30.251213  675703 default_sa.go:45] found service account: "default"
	I1122 00:49:30.251240  675703 default_sa.go:55] duration metric: took 3.00851ms for default service account to be created ...
	I1122 00:49:30.251249  675703 system_pods.go:116] waiting for k8s-apps to be running ...
	I1122 00:49:30.254542  675703 system_pods.go:86] 7 kube-system pods found
	I1122 00:49:30.254625  675703 system_pods.go:89] "coredns-66bc5c9577-mf9wz" [c60bc6ef-6579-4cd2-821a-d54eed09dd2f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:49:30.254648  675703 system_pods.go:89] "etcd-pause-028559" [cde6ab83-039f-4d41-b9d1-9f014e5a0cc2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1122 00:49:30.254667  675703 system_pods.go:89] "kindnet-md6h6" [c69079f9-3127-43a1-99c6-9ec5a41b79cc] Running
	I1122 00:49:30.254704  675703 system_pods.go:89] "kube-apiserver-pause-028559" [e01b3f00-761a-4a4a-883d-0f10a9dcee53] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1122 00:49:30.254729  675703 system_pods.go:89] "kube-controller-manager-pause-028559" [facefbfd-f19e-48a4-9b4a-bc60f64a69bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1122 00:49:30.254748  675703 system_pods.go:89] "kube-proxy-qnj6x" [1e6d9e0d-242d-484d-be05-aaaf175e8c31] Running
	I1122 00:49:30.254787  675703 system_pods.go:89] "kube-scheduler-pause-028559" [f94e531d-a5fe-4de0-9d3d-4779af25bf97] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1122 00:49:30.254814  675703 system_pods.go:126] duration metric: took 3.558107ms to wait for k8s-apps to be running ...
	I1122 00:49:30.254837  675703 system_svc.go:44] waiting for kubelet service to be running ....
	I1122 00:49:30.254921  675703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:49:30.273565  675703 system_svc.go:56] duration metric: took 18.720025ms WaitForService to wait for kubelet
	I1122 00:49:30.273645  675703 kubeadm.go:587] duration metric: took 7.736413892s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:49:30.273677  675703 node_conditions.go:102] verifying NodePressure condition ...
	I1122 00:49:30.281343  675703 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1122 00:49:30.281423  675703 node_conditions.go:123] node cpu capacity is 2
	I1122 00:49:30.281450  675703 node_conditions.go:105] duration metric: took 7.753356ms to run NodePressure ...
	I1122 00:49:30.281478  675703 start.go:242] waiting for startup goroutines ...
	I1122 00:49:30.281518  675703 start.go:247] waiting for cluster config update ...
	I1122 00:49:30.281540  675703 start.go:256] writing updated cluster config ...
	I1122 00:49:30.281929  675703 ssh_runner.go:195] Run: rm -f paused
	I1122 00:49:30.290498  675703 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:49:30.291251  675703 kapi.go:59] client config for pause-028559: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21934-513600/.minikube/profiles/pause-028559/client.crt", KeyFile:"/home/jenkins/minikube-integration/21934-513600/.minikube/profiles/pause-028559/client.key", CAFile:"/home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1122 00:49:30.349437  675703 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-mf9wz" in "kube-system" namespace to be "Ready" or be gone ...
	W1122 00:49:32.356308  675703 pod_ready.go:104] pod "coredns-66bc5c9577-mf9wz" is not "Ready", error: <nil>
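pod_ready.go is waiting for each control-plane pod, selected by the labels listed at 00:49:30.290498, to report the Ready condition; coredns-66bc5c9577-mf9wz is still not Ready at this point. A small sketch of that per-pod check with client-go, where the package and helper name are assumptions:

package podready

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// IsPodReady reports whether the pod's Ready condition is True, which is the state the
// log is waiting for on coredns-66bc5c9577-mf9wz in the kube-system namespace above.
func IsPodReady(ctx context.Context, cs kubernetes.Interface, namespace, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}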
	I1122 00:49:31.287880  659783 logs.go:123] Gathering logs for CRI-O ...
	I1122 00:49:31.287910  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1122 00:49:31.361711  659783 logs.go:123] Gathering logs for container status ...
	I1122 00:49:31.361784  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:49:31.395895  659783 logs.go:123] Gathering logs for kubelet ...
	I1122 00:49:31.395974  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:49:31.519014  659783 logs.go:123] Gathering logs for dmesg ...
	I1122 00:49:31.519054  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:49:34.040116  659783 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:49:34.040588  659783 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1122 00:49:34.040657  659783 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:49:34.040732  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:49:34.068378  659783 cri.go:89] found id: "513641c31371d8c05f0f09cedcd3f1444e01013c0f90e22c86ddeadf4d1a516c"
	I1122 00:49:34.068400  659783 cri.go:89] found id: ""
	I1122 00:49:34.068408  659783 logs.go:282] 1 containers: [513641c31371d8c05f0f09cedcd3f1444e01013c0f90e22c86ddeadf4d1a516c]
	I1122 00:49:34.068465  659783 ssh_runner.go:195] Run: which crictl
	I1122 00:49:34.072298  659783 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1122 00:49:34.072373  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:49:34.098629  659783 cri.go:89] found id: ""
	I1122 00:49:34.098653  659783 logs.go:282] 0 containers: []
	W1122 00:49:34.098663  659783 logs.go:284] No container was found matching "etcd"
	I1122 00:49:34.098669  659783 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1122 00:49:34.098726  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:49:34.124615  659783 cri.go:89] found id: ""
	I1122 00:49:34.124639  659783 logs.go:282] 0 containers: []
	W1122 00:49:34.124648  659783 logs.go:284] No container was found matching "coredns"
	I1122 00:49:34.124654  659783 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:49:34.124716  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:49:34.151983  659783 cri.go:89] found id: "c546164b4e284af119113a13e0a67e01c0dd3fe9b208a4092c79664dd912c8e8"
	I1122 00:49:34.152005  659783 cri.go:89] found id: ""
	I1122 00:49:34.152013  659783 logs.go:282] 1 containers: [c546164b4e284af119113a13e0a67e01c0dd3fe9b208a4092c79664dd912c8e8]
	I1122 00:49:34.152067  659783 ssh_runner.go:195] Run: which crictl
	I1122 00:49:34.155804  659783 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:49:34.155880  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:49:34.182319  659783 cri.go:89] found id: ""
	I1122 00:49:34.182344  659783 logs.go:282] 0 containers: []
	W1122 00:49:34.182353  659783 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:49:34.182360  659783 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:49:34.182438  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:49:34.210190  659783 cri.go:89] found id: "492c7cccdf0364fe71128220d1afdd81ad6e587b857b7898958d72c0a7eb4953"
	I1122 00:49:34.210211  659783 cri.go:89] found id: ""
	I1122 00:49:34.210219  659783 logs.go:282] 1 containers: [492c7cccdf0364fe71128220d1afdd81ad6e587b857b7898958d72c0a7eb4953]
	I1122 00:49:34.210296  659783 ssh_runner.go:195] Run: which crictl
	I1122 00:49:34.214023  659783 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1122 00:49:34.214112  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:49:34.241405  659783 cri.go:89] found id: ""
	I1122 00:49:34.241429  659783 logs.go:282] 0 containers: []
	W1122 00:49:34.241437  659783 logs.go:284] No container was found matching "kindnet"
	I1122 00:49:34.241443  659783 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:49:34.241553  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:49:34.267639  659783 cri.go:89] found id: ""
	I1122 00:49:34.267667  659783 logs.go:282] 0 containers: []
	W1122 00:49:34.267676  659783 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:49:34.267686  659783 logs.go:123] Gathering logs for kube-apiserver [513641c31371d8c05f0f09cedcd3f1444e01013c0f90e22c86ddeadf4d1a516c] ...
	I1122 00:49:34.267728  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 513641c31371d8c05f0f09cedcd3f1444e01013c0f90e22c86ddeadf4d1a516c"
	I1122 00:49:34.302018  659783 logs.go:123] Gathering logs for kube-scheduler [c546164b4e284af119113a13e0a67e01c0dd3fe9b208a4092c79664dd912c8e8] ...
	I1122 00:49:34.302050  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c546164b4e284af119113a13e0a67e01c0dd3fe9b208a4092c79664dd912c8e8"
	I1122 00:49:34.398301  659783 logs.go:123] Gathering logs for kube-controller-manager [492c7cccdf0364fe71128220d1afdd81ad6e587b857b7898958d72c0a7eb4953] ...
	I1122 00:49:34.398338  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 492c7cccdf0364fe71128220d1afdd81ad6e587b857b7898958d72c0a7eb4953"
	I1122 00:49:34.430046  659783 logs.go:123] Gathering logs for CRI-O ...
	I1122 00:49:34.430079  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1122 00:49:34.493544  659783 logs.go:123] Gathering logs for container status ...
	I1122 00:49:34.493578  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:49:34.525986  659783 logs.go:123] Gathering logs for kubelet ...
	I1122 00:49:34.526015  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:49:34.641721  659783 logs.go:123] Gathering logs for dmesg ...
	I1122 00:49:34.641756  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:49:34.659594  659783 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:49:34.659624  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:49:34.728222  659783 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1122 00:49:34.358009  675703 pod_ready.go:104] pod "coredns-66bc5c9577-mf9wz" is not "Ready", error: <nil>
	I1122 00:49:35.364144  675703 pod_ready.go:94] pod "coredns-66bc5c9577-mf9wz" is "Ready"
	I1122 00:49:35.364170  675703 pod_ready.go:86] duration metric: took 5.014700497s for pod "coredns-66bc5c9577-mf9wz" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:49:35.371385  675703 pod_ready.go:83] waiting for pod "etcd-pause-028559" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:49:35.384840  675703 pod_ready.go:94] pod "etcd-pause-028559" is "Ready"
	I1122 00:49:35.384918  675703 pod_ready.go:86] duration metric: took 13.494906ms for pod "etcd-pause-028559" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:49:35.397388  675703 pod_ready.go:83] waiting for pod "kube-apiserver-pause-028559" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:49:35.407774  675703 pod_ready.go:94] pod "kube-apiserver-pause-028559" is "Ready"
	I1122 00:49:35.407853  675703 pod_ready.go:86] duration metric: took 10.439628ms for pod "kube-apiserver-pause-028559" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:49:35.411381  675703 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-028559" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:49:35.552765  675703 pod_ready.go:94] pod "kube-controller-manager-pause-028559" is "Ready"
	I1122 00:49:35.552790  675703 pod_ready.go:86] duration metric: took 141.288685ms for pod "kube-controller-manager-pause-028559" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:49:35.752574  675703 pod_ready.go:83] waiting for pod "kube-proxy-qnj6x" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:49:36.153230  675703 pod_ready.go:94] pod "kube-proxy-qnj6x" is "Ready"
	I1122 00:49:36.153256  675703 pod_ready.go:86] duration metric: took 400.655531ms for pod "kube-proxy-qnj6x" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:49:36.353229  675703 pod_ready.go:83] waiting for pod "kube-scheduler-pause-028559" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:49:38.358301  675703 pod_ready.go:94] pod "kube-scheduler-pause-028559" is "Ready"
	I1122 00:49:38.358331  675703 pod_ready.go:86] duration metric: took 2.005075887s for pod "kube-scheduler-pause-028559" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:49:38.358344  675703 pod_ready.go:40] duration metric: took 8.067764447s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:49:38.417058  675703 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1122 00:49:38.420098  675703 out.go:179] * Done! kubectl is now configured to use "pause-028559" cluster and "default" namespace by default
	I1122 00:49:37.228499  659783 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:49:37.228919  659783 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1122 00:49:37.228992  659783 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:49:37.229062  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:49:37.254860  659783 cri.go:89] found id: "513641c31371d8c05f0f09cedcd3f1444e01013c0f90e22c86ddeadf4d1a516c"
	I1122 00:49:37.254927  659783 cri.go:89] found id: ""
	I1122 00:49:37.254949  659783 logs.go:282] 1 containers: [513641c31371d8c05f0f09cedcd3f1444e01013c0f90e22c86ddeadf4d1a516c]
	I1122 00:49:37.255027  659783 ssh_runner.go:195] Run: which crictl
	I1122 00:49:37.258686  659783 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1122 00:49:37.258780  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:49:37.287483  659783 cri.go:89] found id: ""
	I1122 00:49:37.287505  659783 logs.go:282] 0 containers: []
	W1122 00:49:37.287515  659783 logs.go:284] No container was found matching "etcd"
	I1122 00:49:37.287521  659783 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1122 00:49:37.287577  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:49:37.319527  659783 cri.go:89] found id: ""
	I1122 00:49:37.319552  659783 logs.go:282] 0 containers: []
	W1122 00:49:37.319561  659783 logs.go:284] No container was found matching "coredns"
	I1122 00:49:37.319568  659783 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:49:37.319631  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:49:37.346037  659783 cri.go:89] found id: "c546164b4e284af119113a13e0a67e01c0dd3fe9b208a4092c79664dd912c8e8"
	I1122 00:49:37.346058  659783 cri.go:89] found id: ""
	I1122 00:49:37.346066  659783 logs.go:282] 1 containers: [c546164b4e284af119113a13e0a67e01c0dd3fe9b208a4092c79664dd912c8e8]
	I1122 00:49:37.346123  659783 ssh_runner.go:195] Run: which crictl
	I1122 00:49:37.349584  659783 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:49:37.349651  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:49:37.376164  659783 cri.go:89] found id: ""
	I1122 00:49:37.376187  659783 logs.go:282] 0 containers: []
	W1122 00:49:37.376196  659783 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:49:37.376202  659783 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:49:37.376265  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:49:37.403661  659783 cri.go:89] found id: "492c7cccdf0364fe71128220d1afdd81ad6e587b857b7898958d72c0a7eb4953"
	I1122 00:49:37.403734  659783 cri.go:89] found id: ""
	I1122 00:49:37.403752  659783 logs.go:282] 1 containers: [492c7cccdf0364fe71128220d1afdd81ad6e587b857b7898958d72c0a7eb4953]
	I1122 00:49:37.403817  659783 ssh_runner.go:195] Run: which crictl
	I1122 00:49:37.407561  659783 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1122 00:49:37.407631  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:49:37.433527  659783 cri.go:89] found id: ""
	I1122 00:49:37.433547  659783 logs.go:282] 0 containers: []
	W1122 00:49:37.433556  659783 logs.go:284] No container was found matching "kindnet"
	I1122 00:49:37.433562  659783 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:49:37.433622  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:49:37.466464  659783 cri.go:89] found id: ""
	I1122 00:49:37.466489  659783 logs.go:282] 0 containers: []
	W1122 00:49:37.466498  659783 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:49:37.466508  659783 logs.go:123] Gathering logs for dmesg ...
	I1122 00:49:37.466520  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:49:37.484698  659783 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:49:37.484728  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:49:37.558527  659783 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:49:37.558549  659783 logs.go:123] Gathering logs for kube-apiserver [513641c31371d8c05f0f09cedcd3f1444e01013c0f90e22c86ddeadf4d1a516c] ...
	I1122 00:49:37.558561  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 513641c31371d8c05f0f09cedcd3f1444e01013c0f90e22c86ddeadf4d1a516c"
	I1122 00:49:37.592244  659783 logs.go:123] Gathering logs for kube-scheduler [c546164b4e284af119113a13e0a67e01c0dd3fe9b208a4092c79664dd912c8e8] ...
	I1122 00:49:37.592278  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c546164b4e284af119113a13e0a67e01c0dd3fe9b208a4092c79664dd912c8e8"
	I1122 00:49:37.654272  659783 logs.go:123] Gathering logs for kube-controller-manager [492c7cccdf0364fe71128220d1afdd81ad6e587b857b7898958d72c0a7eb4953] ...
	I1122 00:49:37.654308  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 492c7cccdf0364fe71128220d1afdd81ad6e587b857b7898958d72c0a7eb4953"
	I1122 00:49:37.683329  659783 logs.go:123] Gathering logs for CRI-O ...
	I1122 00:49:37.683354  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1122 00:49:37.742893  659783 logs.go:123] Gathering logs for container status ...
	I1122 00:49:37.742939  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:49:37.774790  659783 logs.go:123] Gathering logs for kubelet ...
	I1122 00:49:37.774817  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:49:40.392116  659783 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:49:40.392564  659783 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1122 00:49:40.392613  659783 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:49:40.392673  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:49:40.421353  659783 cri.go:89] found id: "513641c31371d8c05f0f09cedcd3f1444e01013c0f90e22c86ddeadf4d1a516c"
	I1122 00:49:40.421373  659783 cri.go:89] found id: ""
	I1122 00:49:40.421381  659783 logs.go:282] 1 containers: [513641c31371d8c05f0f09cedcd3f1444e01013c0f90e22c86ddeadf4d1a516c]
	I1122 00:49:40.421438  659783 ssh_runner.go:195] Run: which crictl
	I1122 00:49:40.425192  659783 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1122 00:49:40.425271  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:49:40.450286  659783 cri.go:89] found id: ""
	I1122 00:49:40.450314  659783 logs.go:282] 0 containers: []
	W1122 00:49:40.450323  659783 logs.go:284] No container was found matching "etcd"
	I1122 00:49:40.450330  659783 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1122 00:49:40.450386  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:49:40.476864  659783 cri.go:89] found id: ""
	I1122 00:49:40.476889  659783 logs.go:282] 0 containers: []
	W1122 00:49:40.476898  659783 logs.go:284] No container was found matching "coredns"
	I1122 00:49:40.476904  659783 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:49:40.476962  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:49:40.505369  659783 cri.go:89] found id: "c546164b4e284af119113a13e0a67e01c0dd3fe9b208a4092c79664dd912c8e8"
	I1122 00:49:40.505397  659783 cri.go:89] found id: ""
	I1122 00:49:40.505405  659783 logs.go:282] 1 containers: [c546164b4e284af119113a13e0a67e01c0dd3fe9b208a4092c79664dd912c8e8]
	I1122 00:49:40.505462  659783 ssh_runner.go:195] Run: which crictl
	I1122 00:49:40.509111  659783 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:49:40.509187  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:49:40.536796  659783 cri.go:89] found id: ""
	I1122 00:49:40.536820  659783 logs.go:282] 0 containers: []
	W1122 00:49:40.536829  659783 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:49:40.536835  659783 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:49:40.536892  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:49:40.563132  659783 cri.go:89] found id: "492c7cccdf0364fe71128220d1afdd81ad6e587b857b7898958d72c0a7eb4953"
	I1122 00:49:40.563153  659783 cri.go:89] found id: ""
	I1122 00:49:40.563161  659783 logs.go:282] 1 containers: [492c7cccdf0364fe71128220d1afdd81ad6e587b857b7898958d72c0a7eb4953]
	I1122 00:49:40.563228  659783 ssh_runner.go:195] Run: which crictl
	I1122 00:49:40.567173  659783 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1122 00:49:40.567237  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:49:40.595171  659783 cri.go:89] found id: ""
	I1122 00:49:40.595195  659783 logs.go:282] 0 containers: []
	W1122 00:49:40.595205  659783 logs.go:284] No container was found matching "kindnet"
	I1122 00:49:40.595212  659783 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:49:40.595268  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:49:40.621301  659783 cri.go:89] found id: ""
	I1122 00:49:40.621326  659783 logs.go:282] 0 containers: []
	W1122 00:49:40.621335  659783 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:49:40.621344  659783 logs.go:123] Gathering logs for kube-apiserver [513641c31371d8c05f0f09cedcd3f1444e01013c0f90e22c86ddeadf4d1a516c] ...
	I1122 00:49:40.621357  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 513641c31371d8c05f0f09cedcd3f1444e01013c0f90e22c86ddeadf4d1a516c"
	I1122 00:49:40.660396  659783 logs.go:123] Gathering logs for kube-scheduler [c546164b4e284af119113a13e0a67e01c0dd3fe9b208a4092c79664dd912c8e8] ...
	I1122 00:49:40.660430  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c546164b4e284af119113a13e0a67e01c0dd3fe9b208a4092c79664dd912c8e8"
	I1122 00:49:40.723262  659783 logs.go:123] Gathering logs for kube-controller-manager [492c7cccdf0364fe71128220d1afdd81ad6e587b857b7898958d72c0a7eb4953] ...
	I1122 00:49:40.723325  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 492c7cccdf0364fe71128220d1afdd81ad6e587b857b7898958d72c0a7eb4953"
	I1122 00:49:40.753222  659783 logs.go:123] Gathering logs for CRI-O ...
	I1122 00:49:40.753301  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1122 00:49:40.837307  659783 logs.go:123] Gathering logs for container status ...
	I1122 00:49:40.837395  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:49:40.890696  659783 logs.go:123] Gathering logs for kubelet ...
	I1122 00:49:40.890769  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:49:41.028245  659783 logs.go:123] Gathering logs for dmesg ...
	I1122 00:49:41.028300  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:49:41.048201  659783 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:49:41.048436  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:49:41.156552  659783 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	
	
	==> CRI-O <==
	Nov 22 00:49:22 pause-028559 crio[2044]: time="2025-11-22T00:49:22.913778011Z" level=info msg="Started container" PID=2338 containerID=d3753a55d1b0852aeac6d506d250fc46da9733dcd90885d3802044a0a80ad951 description=kube-system/kube-scheduler-pause-028559/kube-scheduler id=05227cb2-968f-4bc1-97bb-6a520506fee5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=447f373a9450540c35ccff421b669437b04b54109fdd4952b307a3a2dbd1aa13
	Nov 22 00:49:22 pause-028559 crio[2044]: time="2025-11-22T00:49:22.921084356Z" level=info msg="Created container 87e6aa383dd95744b3c571bbd518cffcd7a4eb8cd0d3a6e584ed17b3615976fd: kube-system/kindnet-md6h6/kindnet-cni" id=12247211-8cd6-4ae1-a318-a7b24b4d4501 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:49:22 pause-028559 crio[2044]: time="2025-11-22T00:49:22.921856165Z" level=info msg="Starting container: 87e6aa383dd95744b3c571bbd518cffcd7a4eb8cd0d3a6e584ed17b3615976fd" id=6bff490c-8540-4c74-931d-5c68197f6a12 name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:49:22 pause-028559 crio[2044]: time="2025-11-22T00:49:22.926423966Z" level=info msg="Started container" PID=2347 containerID=87e6aa383dd95744b3c571bbd518cffcd7a4eb8cd0d3a6e584ed17b3615976fd description=kube-system/kindnet-md6h6/kindnet-cni id=6bff490c-8540-4c74-931d-5c68197f6a12 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d59ab14bb3ccc2a17b058cab963d51d99b6250fe01089cb9ec2107b7add11a95
	Nov 22 00:49:22 pause-028559 crio[2044]: time="2025-11-22T00:49:22.96926119Z" level=info msg="Created container 56e196e0cbb051ab34ffee4c1a27c68525705ef12cf36abbf63ad2924e3b38d1: kube-system/coredns-66bc5c9577-mf9wz/coredns" id=1f1ed80a-1f96-436f-aa86-1417ed37b58d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:49:22 pause-028559 crio[2044]: time="2025-11-22T00:49:22.97361851Z" level=info msg="Starting container: 56e196e0cbb051ab34ffee4c1a27c68525705ef12cf36abbf63ad2924e3b38d1" id=0b9576d6-7191-4f6b-91b2-b4fa3eba215f name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:49:22 pause-028559 crio[2044]: time="2025-11-22T00:49:22.976075365Z" level=info msg="Started container" PID=2376 containerID=56e196e0cbb051ab34ffee4c1a27c68525705ef12cf36abbf63ad2924e3b38d1 description=kube-system/coredns-66bc5c9577-mf9wz/coredns id=0b9576d6-7191-4f6b-91b2-b4fa3eba215f name=/runtime.v1.RuntimeService/StartContainer sandboxID=d7e1d7e0efbc2a7ec9c7c67cdaa6fb979dec664c9f36c6a48c08c993cd41dee8
	Nov 22 00:49:22 pause-028559 crio[2044]: time="2025-11-22T00:49:22.97730609Z" level=info msg="Created container 04ac46e69fa75a9a16c68b21e8dbf076926ba49248fd3f5118e8445fb78a9d5a: kube-system/kube-proxy-qnj6x/kube-proxy" id=2d83e4a2-0ebf-4888-a9de-8321dcaffab0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:49:22 pause-028559 crio[2044]: time="2025-11-22T00:49:22.978094744Z" level=info msg="Starting container: 04ac46e69fa75a9a16c68b21e8dbf076926ba49248fd3f5118e8445fb78a9d5a" id=36170a58-33a9-4414-9b81-bddab701a2ba name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:49:22 pause-028559 crio[2044]: time="2025-11-22T00:49:22.986553622Z" level=info msg="Started container" PID=2351 containerID=04ac46e69fa75a9a16c68b21e8dbf076926ba49248fd3f5118e8445fb78a9d5a description=kube-system/kube-proxy-qnj6x/kube-proxy id=36170a58-33a9-4414-9b81-bddab701a2ba name=/runtime.v1.RuntimeService/StartContainer sandboxID=c2dbf2401845cb444159fa771efde2c2d158ea0001fde11ce0f78c0d60a59f06
	Nov 22 00:49:33 pause-028559 crio[2044]: time="2025-11-22T00:49:33.282215374Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:49:33 pause-028559 crio[2044]: time="2025-11-22T00:49:33.2861763Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:49:33 pause-028559 crio[2044]: time="2025-11-22T00:49:33.286352894Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 22 00:49:33 pause-028559 crio[2044]: time="2025-11-22T00:49:33.286386985Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:49:33 pause-028559 crio[2044]: time="2025-11-22T00:49:33.289526388Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:49:33 pause-028559 crio[2044]: time="2025-11-22T00:49:33.28957309Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 22 00:49:33 pause-028559 crio[2044]: time="2025-11-22T00:49:33.289592708Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:49:33 pause-028559 crio[2044]: time="2025-11-22T00:49:33.293053128Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:49:33 pause-028559 crio[2044]: time="2025-11-22T00:49:33.293086768Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 22 00:49:33 pause-028559 crio[2044]: time="2025-11-22T00:49:33.293108822Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:49:33 pause-028559 crio[2044]: time="2025-11-22T00:49:33.296181077Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:49:33 pause-028559 crio[2044]: time="2025-11-22T00:49:33.296212977Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 22 00:49:33 pause-028559 crio[2044]: time="2025-11-22T00:49:33.296235377Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:49:33 pause-028559 crio[2044]: time="2025-11-22T00:49:33.299265195Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:49:33 pause-028559 crio[2044]: time="2025-11-22T00:49:33.299301199Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	56e196e0cbb05       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   19 seconds ago      Running             coredns                   1                   d7e1d7e0efbc2       coredns-66bc5c9577-mf9wz               kube-system
	87e6aa383dd95       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   19 seconds ago      Running             kindnet-cni               1                   d59ab14bb3ccc       kindnet-md6h6                          kube-system
	04ac46e69fa75       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   19 seconds ago      Running             kube-proxy                1                   c2dbf2401845c       kube-proxy-qnj6x                       kube-system
	096e422d76e9c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   19 seconds ago      Running             kube-controller-manager   1                   f15ca62deff89       kube-controller-manager-pause-028559   kube-system
	d3753a55d1b08       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   19 seconds ago      Running             kube-scheduler            1                   447f373a94505       kube-scheduler-pause-028559            kube-system
	369bd2d9f6691       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   19 seconds ago      Running             kube-apiserver            1                   4bf93441f7197       kube-apiserver-pause-028559            kube-system
	c697a18245b56       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   19 seconds ago      Running             etcd                      1                   c5f392634b9bc       etcd-pause-028559                      kube-system
	e8a18ac29ae5a       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   31 seconds ago      Exited              coredns                   0                   d7e1d7e0efbc2       coredns-66bc5c9577-mf9wz               kube-system
	7cf54964fa620       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   42 seconds ago      Exited              kindnet-cni               0                   d59ab14bb3ccc       kindnet-md6h6                          kube-system
	8a47d7c0ee952       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   43 seconds ago      Exited              kube-proxy                0                   c2dbf2401845c       kube-proxy-qnj6x                       kube-system
	36494fae0c15c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   56 seconds ago      Exited              kube-controller-manager   0                   f15ca62deff89       kube-controller-manager-pause-028559   kube-system
	b36b71426eedd       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   56 seconds ago      Exited              kube-scheduler            0                   447f373a94505       kube-scheduler-pause-028559            kube-system
	fe27dafacf48d       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   56 seconds ago      Exited              kube-apiserver            0                   4bf93441f7197       kube-apiserver-pause-028559            kube-system
	c1bb2c7a299bb       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   56 seconds ago      Exited              etcd                      0                   c5f392634b9bc       etcd-pause-028559                      kube-system
	
	
	==> coredns [56e196e0cbb051ab34ffee4c1a27c68525705ef12cf36abbf63ad2924e3b38d1] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52746 - 36541 "HINFO IN 1552381233468049872.8928525143797496713. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023222908s
	
	
	==> coredns [e8a18ac29ae5ac05f6b5b4c70bcf6b1fc73a710e59136307cfa68c8bcd36557d] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42276 - 1746 "HINFO IN 4082627028878797549.2099873197882724034. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014723523s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-028559
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-028559
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=pause-028559
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T00_48_53_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:48:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-028559
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:49:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:49:23 +0000   Sat, 22 Nov 2025 00:48:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:49:23 +0000   Sat, 22 Nov 2025 00:48:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:49:23 +0000   Sat, 22 Nov 2025 00:48:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 22 Nov 2025 00:49:23 +0000   Sat, 22 Nov 2025 00:49:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-028559
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c92eea4d3c03c0156e01fcf8691e3907
	  System UUID:                c6993104-bd8d-4c82-9995-66f6f1c875cf
	  Boot ID:                    72ac7385-472f-47d1-a23e-bd80468d6e09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-mf9wz                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     45s
	  kube-system                 etcd-pause-028559                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         52s
	  kube-system                 kindnet-md6h6                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      45s
	  kube-system                 kube-apiserver-pause-028559             250m (12%)    0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 kube-controller-manager-pause-028559    200m (10%)    0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 kube-proxy-qnj6x                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kube-scheduler-pause-028559             100m (5%)     0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 43s                kube-proxy       
	  Normal   Starting                 12s                kube-proxy       
	  Normal   NodeHasSufficientPID     58s (x8 over 58s)  kubelet          Node pause-028559 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 58s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  58s (x8 over 58s)  kubelet          Node pause-028559 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    58s (x8 over 58s)  kubelet          Node pause-028559 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 58s                kubelet          Starting kubelet.
	  Normal   Starting                 50s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 50s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  50s                kubelet          Node pause-028559 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    50s                kubelet          Node pause-028559 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     50s                kubelet          Node pause-028559 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           46s                node-controller  Node pause-028559 event: Registered Node pause-028559 in Controller
	  Normal   NodeReady                33s                kubelet          Node pause-028559 status is now: NodeReady
	  Normal   RegisteredNode           10s                node-controller  Node pause-028559 event: Registered Node pause-028559 in Controller
	
	
	==> dmesg <==
	[  +3.904643] overlayfs: idmapped layers are currently not supported
	[Nov22 00:15] overlayfs: idmapped layers are currently not supported
	[Nov22 00:23] overlayfs: idmapped layers are currently not supported
	[  +4.038304] overlayfs: idmapped layers are currently not supported
	[Nov22 00:24] overlayfs: idmapped layers are currently not supported
	[Nov22 00:25] overlayfs: idmapped layers are currently not supported
	[Nov22 00:26] overlayfs: idmapped layers are currently not supported
	[Nov22 00:31] overlayfs: idmapped layers are currently not supported
	[ +30.712010] overlayfs: idmapped layers are currently not supported
	[Nov22 00:32] overlayfs: idmapped layers are currently not supported
	[Nov22 00:33] overlayfs: idmapped layers are currently not supported
	[Nov22 00:35] overlayfs: idmapped layers are currently not supported
	[Nov22 00:36] overlayfs: idmapped layers are currently not supported
	[ +18.168104] overlayfs: idmapped layers are currently not supported
	[Nov22 00:37] overlayfs: idmapped layers are currently not supported
	[ +56.322609] overlayfs: idmapped layers are currently not supported
	[Nov22 00:38] overlayfs: idmapped layers are currently not supported
	[Nov22 00:39] overlayfs: idmapped layers are currently not supported
	[ +23.174928] overlayfs: idmapped layers are currently not supported
	[Nov22 00:41] overlayfs: idmapped layers are currently not supported
	[Nov22 00:42] overlayfs: idmapped layers are currently not supported
	[Nov22 00:44] overlayfs: idmapped layers are currently not supported
	[Nov22 00:45] overlayfs: idmapped layers are currently not supported
	[Nov22 00:46] overlayfs: idmapped layers are currently not supported
	[Nov22 00:48] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [c1bb2c7a299bbbaca1218d7345b106ea8559d1df49b48d1e8effc89a6e7a38b3] <==
	{"level":"warn","ts":"2025-11-22T00:48:48.674347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:48:48.710781Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:48:48.726288Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:48:48.830058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35132","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-22T00:48:58.102773Z","caller":"traceutil/trace.go:172","msg":"trace[427647510] transaction","detail":"{read_only:false; response_revision:373; number_of_response:1; }","duration":"101.393068ms","start":"2025-11-22T00:48:58.001356Z","end":"2025-11-22T00:48:58.102749Z","steps":["trace[427647510] 'compare'  (duration: 26.57408ms)","trace[427647510] 'store kv pair into bolt db' {req_type:put; key:/registry/configmaps/kube-node-lease/kube-root-ca.crt; req_size:1740; } (duration: 43.925413ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-22T00:48:58.103949Z","caller":"traceutil/trace.go:172","msg":"trace[1198625101] transaction","detail":"{read_only:false; response_revision:374; number_of_response:1; }","duration":"101.295725ms","start":"2025-11-22T00:48:58.002631Z","end":"2025-11-22T00:48:58.103927Z","steps":["trace[1198625101] 'process raft request'  (duration: 71.347394ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T00:48:58.151315Z","caller":"traceutil/trace.go:172","msg":"trace[534132880] transaction","detail":"{read_only:false; response_revision:375; number_of_response:1; }","duration":"148.537473ms","start":"2025-11-22T00:48:58.002759Z","end":"2025-11-22T00:48:58.151296Z","steps":["trace[534132880] 'process raft request'  (duration: 129.11136ms)","trace[534132880] 'compare'  (duration: 16.445195ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-22T00:49:14.399681Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-22T00:49:14.399724Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-028559","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-11-22T00:49:14.399941Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-22T00:49:14.537856Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-22T00:49:14.539293Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-22T00:49:14.539350Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-22T00:49:14.539434Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-22T00:49:14.539458Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-22T00:49:14.539469Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-22T00:49:14.539486Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"warn","ts":"2025-11-22T00:49:14.539444Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-11-22T00:49:14.539530Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"error","ts":"2025-11-22T00:49:14.539538Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-22T00:49:14.539520Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-22T00:49:14.542789Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-11-22T00:49:14.542870Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-22T00:49:14.542908Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-22T00:49:14.542916Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-028559","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> etcd [c697a18245b5616f58771650b29470b900c4e63fb555bd2347a20b506820e266] <==
	{"level":"warn","ts":"2025-11-22T00:49:27.747549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:49:27.769697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:49:27.789443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:49:27.829439Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:49:27.848381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:49:27.886618Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:49:27.920916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:49:27.926458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:49:27.984157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:49:28.004268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:49:28.046793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:49:28.051129Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:49:28.094987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:49:28.099102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:49:28.119320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:49:28.146047Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:49:28.171455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:49:28.189189Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:49:28.206441Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:49:28.227836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:49:28.251214Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:49:28.280129Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:49:28.315650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:49:28.330323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:49:28.381196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32874","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 00:49:42 up  5:31,  0 user,  load average: 1.89, 2.45, 1.96
	Linux pause-028559 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7cf54964fa6209d21f425b45edbf33a457dcdc58ce72370d8035cad09e292b10] <==
	I1122 00:48:59.538061       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1122 00:48:59.610085       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1122 00:48:59.610292       1 main.go:148] setting mtu 1500 for CNI 
	I1122 00:48:59.610334       1 main.go:178] kindnetd IP family: "ipv4"
	I1122 00:48:59.610375       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-22T00:48:59Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1122 00:48:59.719884       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1122 00:48:59.809952       1 controller.go:381] "Waiting for informer caches to sync"
	I1122 00:48:59.809981       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1122 00:48:59.810112       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1122 00:49:00.125861       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1122 00:49:00.125976       1 metrics.go:72] Registering metrics
	I1122 00:49:00.126100       1 controller.go:711] "Syncing nftables rules"
	I1122 00:49:09.718622       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:49:09.718666       1 main.go:301] handling current node
	
	
	==> kindnet [87e6aa383dd95744b3c571bbd518cffcd7a4eb8cd0d3a6e584ed17b3615976fd] <==
	I1122 00:49:23.019359       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1122 00:49:23.019745       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1122 00:49:23.019916       1 main.go:148] setting mtu 1500 for CNI 
	I1122 00:49:23.019957       1 main.go:178] kindnetd IP family: "ipv4"
	I1122 00:49:23.019992       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-22T00:49:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1122 00:49:23.281604       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1122 00:49:23.281673       1 controller.go:381] "Waiting for informer caches to sync"
	I1122 00:49:23.281706       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1122 00:49:23.282580       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1122 00:49:29.385942       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1122 00:49:29.385980       1 metrics.go:72] Registering metrics
	I1122 00:49:29.386039       1 controller.go:711] "Syncing nftables rules"
	I1122 00:49:33.281875       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:49:33.281923       1 main.go:301] handling current node
	
	
	==> kube-apiserver [369bd2d9f6691fe8442c1241cc0e13dde6eb84069c52da0c86e0481560a45f58] <==
	I1122 00:49:29.227576       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1122 00:49:29.227589       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1122 00:49:29.228257       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1122 00:49:29.228304       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1122 00:49:29.245644       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1122 00:49:29.251739       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1122 00:49:29.251928       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1122 00:49:29.252156       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1122 00:49:29.252448       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1122 00:49:29.260293       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1122 00:49:29.295392       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1122 00:49:29.295523       1 policy_source.go:240] refreshing policies
	I1122 00:49:29.308471       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:49:29.309783       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1122 00:49:29.316136       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1122 00:49:29.333383       1 aggregator.go:171] initial CRD sync complete...
	I1122 00:49:29.333535       1 autoregister_controller.go:144] Starting autoregister controller
	I1122 00:49:29.335086       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1122 00:49:29.335153       1 cache.go:39] Caches are synced for autoregister controller
	I1122 00:49:29.944674       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1122 00:49:31.176374       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1122 00:49:32.571263       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1122 00:49:32.820014       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1122 00:49:32.869979       1 controller.go:667] quota admission added evaluator for: endpoints
	I1122 00:49:32.970319       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [fe27dafacf48d07af4ed5cb9690723267dc16f0e9bc5356896a5e1d595009ff0] <==
	W1122 00:49:14.414256       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1122 00:49:14.414561       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1122 00:49:14.414321       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1122 00:49:14.414352       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1122 00:49:14.414378       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1122 00:49:14.414781       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1122 00:49:14.414413       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1122 00:49:14.414870       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1122 00:49:14.414930       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1122 00:49:14.415000       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1122 00:49:14.415208       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1122 00:49:14.415319       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1122 00:49:14.415445       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1122 00:49:14.415309       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1122 00:49:14.415590       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1122 00:49:14.414526       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1122 00:49:14.414625       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1122 00:49:14.414652       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1122 00:49:14.414690       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1122 00:49:14.414724       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1122 00:49:14.414752       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1122 00:49:14.414843       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1122 00:49:14.415693       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1122 00:49:14.415771       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1122 00:49:14.416154       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [096e422d76e9c2d03ee46d5100a4c9d88d27872157f3e04d2ca3d33d12269f96] <==
	I1122 00:49:32.566720       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1122 00:49:32.569057       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1122 00:49:32.569194       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:49:32.573401       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1122 00:49:32.579859       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1122 00:49:32.582079       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1122 00:49:32.582177       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1122 00:49:32.582275       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-028559"
	I1122 00:49:32.582321       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1122 00:49:32.587432       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:49:32.612018       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1122 00:49:32.612109       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1122 00:49:32.612126       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1122 00:49:32.612167       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1122 00:49:32.612613       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1122 00:49:32.612638       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1122 00:49:32.612686       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1122 00:49:32.612714       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1122 00:49:32.618390       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1122 00:49:32.620746       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1122 00:49:32.621740       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:49:32.624085       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1122 00:49:32.631992       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1122 00:49:32.632091       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1122 00:49:32.632124       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	
	
	==> kube-controller-manager [36494fae0c15c7ac23088851e0409e2f96cb7f3066877902ebe7aedf80916b67] <==
	I1122 00:48:56.571666       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1122 00:48:56.572768       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1122 00:48:56.572937       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1122 00:48:56.577489       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1122 00:48:56.580594       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:48:56.583019       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1122 00:48:56.589997       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1122 00:48:56.597260       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:48:56.616142       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1122 00:48:56.617413       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1122 00:48:56.617422       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1122 00:48:56.617547       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1122 00:48:56.617600       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1122 00:48:56.617629       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1122 00:48:56.617658       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1122 00:48:56.617705       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1122 00:48:56.618543       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1122 00:48:56.620806       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1122 00:48:56.620936       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1122 00:48:56.620997       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1122 00:48:56.621181       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1122 00:48:56.625680       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:48:56.627207       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-028559" podCIDRs=["10.244.0.0/24"]
	I1122 00:48:56.630768       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1122 00:49:11.574061       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [04ac46e69fa75a9a16c68b21e8dbf076926ba49248fd3f5118e8445fb78a9d5a] <==
	I1122 00:49:23.044612       1 server_linux.go:53] "Using iptables proxy"
	I1122 00:49:24.965982       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1122 00:49:29.289556       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1122 00:49:29.305885       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1122 00:49:29.325934       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1122 00:49:29.443747       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1122 00:49:29.443811       1 server_linux.go:132] "Using iptables Proxier"
	I1122 00:49:29.460265       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1122 00:49:29.460655       1 server.go:527] "Version info" version="v1.34.1"
	I1122 00:49:29.460679       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:49:29.473967       1 config.go:106] "Starting endpoint slice config controller"
	I1122 00:49:29.474153       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1122 00:49:29.474491       1 config.go:200] "Starting service config controller"
	I1122 00:49:29.474546       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1122 00:49:29.474913       1 config.go:403] "Starting serviceCIDR config controller"
	I1122 00:49:29.474971       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1122 00:49:29.475467       1 config.go:309] "Starting node config controller"
	I1122 00:49:29.475527       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1122 00:49:29.475558       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1122 00:49:29.576274       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1122 00:49:29.576344       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1122 00:49:29.576586       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [8a47d7c0ee952b2d53a1e55f636df9b8ea9e35a1de95e8cd16ba1ee91d2429e5] <==
	I1122 00:48:58.792896       1 server_linux.go:53] "Using iptables proxy"
	I1122 00:48:58.883512       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1122 00:48:58.992801       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1122 00:48:58.992842       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1122 00:48:58.992906       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1122 00:48:59.012676       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1122 00:48:59.012735       1 server_linux.go:132] "Using iptables Proxier"
	I1122 00:48:59.018438       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1122 00:48:59.018776       1 server.go:527] "Version info" version="v1.34.1"
	I1122 00:48:59.018797       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:48:59.020072       1 config.go:200] "Starting service config controller"
	I1122 00:48:59.020095       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1122 00:48:59.020111       1 config.go:106] "Starting endpoint slice config controller"
	I1122 00:48:59.020115       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1122 00:48:59.020127       1 config.go:403] "Starting serviceCIDR config controller"
	I1122 00:48:59.020131       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1122 00:48:59.022920       1 config.go:309] "Starting node config controller"
	I1122 00:48:59.022941       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1122 00:48:59.022949       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1122 00:48:59.120833       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1122 00:48:59.120875       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1122 00:48:59.120922       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [b36b71426eeddbbd8a66fee6ba6d51873fa9668612addcc8f00a16c6fdb775fd] <==
	E1122 00:48:49.859305       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1122 00:48:49.859530       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1122 00:48:49.859629       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1122 00:48:49.859704       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1122 00:48:49.859781       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1122 00:48:49.859852       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1122 00:48:49.860125       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1122 00:48:49.860187       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1122 00:48:49.860246       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1122 00:48:49.860289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1122 00:48:49.860374       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1122 00:48:49.860416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1122 00:48:49.860584       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1122 00:48:50.675390       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1122 00:48:50.687613       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1122 00:48:50.729285       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1122 00:48:50.816346       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1122 00:48:50.842088       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1122 00:48:51.537610       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:49:14.386312       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1122 00:49:14.386339       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1122 00:49:14.386373       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1122 00:49:14.386403       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:49:14.386529       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1122 00:49:14.386557       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [d3753a55d1b0852aeac6d506d250fc46da9733dcd90885d3802044a0a80ad951] <==
	I1122 00:49:26.862955       1 serving.go:386] Generated self-signed cert in-memory
	W1122 00:49:29.202036       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1122 00:49:29.202077       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1122 00:49:29.202087       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1122 00:49:29.202094       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1122 00:49:29.297401       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1122 00:49:29.297502       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:49:29.302123       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:49:29.302231       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:49:29.304579       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1122 00:49:29.304672       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1122 00:49:29.402777       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 22 00:49:22 pause-028559 kubelet[1296]: E1122 00:49:22.802037    1296 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-028559\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="44c3af8ce59f9041bc4996c94c884532" pod="kube-system/kube-apiserver-pause-028559"
	Nov 22 00:49:22 pause-028559 kubelet[1296]: E1122 00:49:22.802536    1296 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-028559\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="b87ad4bf409fe052b180b22ad3a54cf6" pod="kube-system/kube-scheduler-pause-028559"
	Nov 22 00:49:22 pause-028559 kubelet[1296]: I1122 00:49:22.807112    1296 scope.go:117] "RemoveContainer" containerID="7cf54964fa6209d21f425b45edbf33a457dcdc58ce72370d8035cad09e292b10"
	Nov 22 00:49:22 pause-028559 kubelet[1296]: E1122 00:49:22.807543    1296 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-028559\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="60135025660a8ec9cd48fe139ccdc20a" pod="kube-system/etcd-pause-028559"
	Nov 22 00:49:22 pause-028559 kubelet[1296]: E1122 00:49:22.808540    1296 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-028559\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="44c3af8ce59f9041bc4996c94c884532" pod="kube-system/kube-apiserver-pause-028559"
	Nov 22 00:49:22 pause-028559 kubelet[1296]: E1122 00:49:22.814206    1296 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-028559\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="b87ad4bf409fe052b180b22ad3a54cf6" pod="kube-system/kube-scheduler-pause-028559"
	Nov 22 00:49:22 pause-028559 kubelet[1296]: E1122 00:49:22.814480    1296 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-md6h6\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="c69079f9-3127-43a1-99c6-9ec5a41b79cc" pod="kube-system/kindnet-md6h6"
	Nov 22 00:49:22 pause-028559 kubelet[1296]: E1122 00:49:22.814681    1296 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qnj6x\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="1e6d9e0d-242d-484d-be05-aaaf175e8c31" pod="kube-system/kube-proxy-qnj6x"
	Nov 22 00:49:22 pause-028559 kubelet[1296]: E1122 00:49:22.814872    1296 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-028559\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="67f331a25b5bd2923696a64e0dc87204" pod="kube-system/kube-controller-manager-pause-028559"
	Nov 22 00:49:22 pause-028559 kubelet[1296]: I1122 00:49:22.823727    1296 scope.go:117] "RemoveContainer" containerID="e8a18ac29ae5ac05f6b5b4c70bcf6b1fc73a710e59136307cfa68c8bcd36557d"
	Nov 22 00:49:22 pause-028559 kubelet[1296]: E1122 00:49:22.824368    1296 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-028559\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="67f331a25b5bd2923696a64e0dc87204" pod="kube-system/kube-controller-manager-pause-028559"
	Nov 22 00:49:22 pause-028559 kubelet[1296]: E1122 00:49:22.824588    1296 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-028559\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="60135025660a8ec9cd48fe139ccdc20a" pod="kube-system/etcd-pause-028559"
	Nov 22 00:49:22 pause-028559 kubelet[1296]: E1122 00:49:22.824780    1296 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-028559\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="44c3af8ce59f9041bc4996c94c884532" pod="kube-system/kube-apiserver-pause-028559"
	Nov 22 00:49:22 pause-028559 kubelet[1296]: E1122 00:49:22.824974    1296 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-028559\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="b87ad4bf409fe052b180b22ad3a54cf6" pod="kube-system/kube-scheduler-pause-028559"
	Nov 22 00:49:22 pause-028559 kubelet[1296]: E1122 00:49:22.825183    1296 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-md6h6\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="c69079f9-3127-43a1-99c6-9ec5a41b79cc" pod="kube-system/kindnet-md6h6"
	Nov 22 00:49:22 pause-028559 kubelet[1296]: E1122 00:49:22.825374    1296 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qnj6x\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="1e6d9e0d-242d-484d-be05-aaaf175e8c31" pod="kube-system/kube-proxy-qnj6x"
	Nov 22 00:49:22 pause-028559 kubelet[1296]: E1122 00:49:22.825564    1296 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-mf9wz\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="c60bc6ef-6579-4cd2-821a-d54eed09dd2f" pod="kube-system/coredns-66bc5c9577-mf9wz"
	Nov 22 00:49:29 pause-028559 kubelet[1296]: E1122 00:49:29.114419    1296 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-028559\" is forbidden: User \"system:node:pause-028559\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-028559' and this object" podUID="60135025660a8ec9cd48fe139ccdc20a" pod="kube-system/etcd-pause-028559"
	Nov 22 00:49:29 pause-028559 kubelet[1296]: E1122 00:49:29.114956    1296 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-028559\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-028559' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Nov 22 00:49:29 pause-028559 kubelet[1296]: E1122 00:49:29.149132    1296 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-028559\" is forbidden: User \"system:node:pause-028559\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-028559' and this object" podUID="44c3af8ce59f9041bc4996c94c884532" pod="kube-system/kube-apiserver-pause-028559"
	Nov 22 00:49:29 pause-028559 kubelet[1296]: E1122 00:49:29.199478    1296 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-028559\" is forbidden: User \"system:node:pause-028559\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-028559' and this object" podUID="b87ad4bf409fe052b180b22ad3a54cf6" pod="kube-system/kube-scheduler-pause-028559"
	Nov 22 00:49:32 pause-028559 kubelet[1296]: W1122 00:49:32.861596    1296 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Nov 22 00:49:38 pause-028559 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 22 00:49:38 pause-028559 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 22 00:49:38 pause-028559 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-028559 -n pause-028559
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-028559 -n pause-028559: exit status 2 (348.777497ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-028559 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-028559
helpers_test.go:243: (dbg) docker inspect pause-028559:

-- stdout --
	[
	    {
	        "Id": "6a94f69817b77fc564d5d76744b1439060476eb42bc40435169dbd0ee1102740",
	        "Created": "2025-11-22T00:48:27.960744645Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 672656,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-22T00:48:28.033486895Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:97061622515a85470dad13cb046ad5fc5021fcd2b05aa921a620abb52951cd5d",
	        "ResolvConfPath": "/var/lib/docker/containers/6a94f69817b77fc564d5d76744b1439060476eb42bc40435169dbd0ee1102740/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6a94f69817b77fc564d5d76744b1439060476eb42bc40435169dbd0ee1102740/hostname",
	        "HostsPath": "/var/lib/docker/containers/6a94f69817b77fc564d5d76744b1439060476eb42bc40435169dbd0ee1102740/hosts",
	        "LogPath": "/var/lib/docker/containers/6a94f69817b77fc564d5d76744b1439060476eb42bc40435169dbd0ee1102740/6a94f69817b77fc564d5d76744b1439060476eb42bc40435169dbd0ee1102740-json.log",
	        "Name": "/pause-028559",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-028559:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-028559",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6a94f69817b77fc564d5d76744b1439060476eb42bc40435169dbd0ee1102740",
	                "LowerDir": "/var/lib/docker/overlay2/c149ec3533d2a3b038708dde1e912b8b07514fcbcc3d1ff393f0a0a670aea096-init/diff:/var/lib/docker/overlay2/7e8788c6de692bc1c3758a2bb2c4b8da0fbba26855f855c0f3b655bfbdd92f8e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c149ec3533d2a3b038708dde1e912b8b07514fcbcc3d1ff393f0a0a670aea096/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c149ec3533d2a3b038708dde1e912b8b07514fcbcc3d1ff393f0a0a670aea096/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c149ec3533d2a3b038708dde1e912b8b07514fcbcc3d1ff393f0a0a670aea096/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-028559",
	                "Source": "/var/lib/docker/volumes/pause-028559/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-028559",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-028559",
	                "name.minikube.sigs.k8s.io": "pause-028559",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f34db19d45abcc46dc4209db03db2d51e354914f726aa8ba5d05989b3d7a42e8",
	            "SandboxKey": "/var/run/docker/netns/f34db19d45ab",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33745"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33746"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33749"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33747"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33748"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-028559": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:43:28:1d:6f:9f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f14dae99aeca43c4226c8eb18b4de4b38a2da23b90842e84338c84d11826484b",
	                    "EndpointID": "a66044e880589ed0971e98a7c1b32e86f3e945a96d2aaba8fc0bc5d539026083",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-028559",
	                        "6a94f69817b7"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-028559 -n pause-028559
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-028559 -n pause-028559: exit status 2 (387.692818ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-028559 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-028559 logs -n 25: (1.635542629s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-307118 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-307118       │ jenkins │ v1.37.0 │ 22 Nov 25 00:43 UTC │ 22 Nov 25 00:44 UTC │
	│ start   │ -p missing-upgrade-264026 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-264026    │ jenkins │ v1.32.0 │ 22 Nov 25 00:44 UTC │ 22 Nov 25 00:45 UTC │
	│ start   │ -p NoKubernetes-307118 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-307118       │ jenkins │ v1.37.0 │ 22 Nov 25 00:44 UTC │ 22 Nov 25 00:44 UTC │
	│ delete  │ -p NoKubernetes-307118                                                                                                                   │ NoKubernetes-307118       │ jenkins │ v1.37.0 │ 22 Nov 25 00:44 UTC │ 22 Nov 25 00:44 UTC │
	│ start   │ -p NoKubernetes-307118 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-307118       │ jenkins │ v1.37.0 │ 22 Nov 25 00:44 UTC │ 22 Nov 25 00:44 UTC │
	│ ssh     │ -p NoKubernetes-307118 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-307118       │ jenkins │ v1.37.0 │ 22 Nov 25 00:44 UTC │                     │
	│ stop    │ -p NoKubernetes-307118                                                                                                                   │ NoKubernetes-307118       │ jenkins │ v1.37.0 │ 22 Nov 25 00:44 UTC │ 22 Nov 25 00:45 UTC │
	│ start   │ -p NoKubernetes-307118 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-307118       │ jenkins │ v1.37.0 │ 22 Nov 25 00:45 UTC │ 22 Nov 25 00:45 UTC │
	│ ssh     │ -p NoKubernetes-307118 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-307118       │ jenkins │ v1.37.0 │ 22 Nov 25 00:45 UTC │                     │
	│ delete  │ -p NoKubernetes-307118                                                                                                                   │ NoKubernetes-307118       │ jenkins │ v1.37.0 │ 22 Nov 25 00:45 UTC │ 22 Nov 25 00:45 UTC │
	│ start   │ -p kubernetes-upgrade-134864 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-134864 │ jenkins │ v1.37.0 │ 22 Nov 25 00:45 UTC │ 22 Nov 25 00:45 UTC │
	│ start   │ -p missing-upgrade-264026 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-264026    │ jenkins │ v1.37.0 │ 22 Nov 25 00:45 UTC │ 22 Nov 25 00:46 UTC │
	│ stop    │ -p kubernetes-upgrade-134864                                                                                                             │ kubernetes-upgrade-134864 │ jenkins │ v1.37.0 │ 22 Nov 25 00:45 UTC │ 22 Nov 25 00:45 UTC │
	│ start   │ -p kubernetes-upgrade-134864 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-134864 │ jenkins │ v1.37.0 │ 22 Nov 25 00:45 UTC │                     │
	│ delete  │ -p missing-upgrade-264026                                                                                                                │ missing-upgrade-264026    │ jenkins │ v1.37.0 │ 22 Nov 25 00:46 UTC │ 22 Nov 25 00:46 UTC │
	│ start   │ -p stopped-upgrade-070222 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-070222    │ jenkins │ v1.32.0 │ 22 Nov 25 00:46 UTC │ 22 Nov 25 00:46 UTC │
	│ stop    │ stopped-upgrade-070222 stop                                                                                                              │ stopped-upgrade-070222    │ jenkins │ v1.32.0 │ 22 Nov 25 00:46 UTC │ 22 Nov 25 00:46 UTC │
	│ start   │ -p stopped-upgrade-070222 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-070222    │ jenkins │ v1.37.0 │ 22 Nov 25 00:46 UTC │ 22 Nov 25 00:47 UTC │
	│ delete  │ -p stopped-upgrade-070222                                                                                                                │ stopped-upgrade-070222    │ jenkins │ v1.37.0 │ 22 Nov 25 00:47 UTC │ 22 Nov 25 00:47 UTC │
	│ start   │ -p running-upgrade-234956 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-234956    │ jenkins │ v1.32.0 │ 22 Nov 25 00:47 UTC │ 22 Nov 25 00:48 UTC │
	│ start   │ -p running-upgrade-234956 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-234956    │ jenkins │ v1.37.0 │ 22 Nov 25 00:48 UTC │ 22 Nov 25 00:48 UTC │
	│ delete  │ -p running-upgrade-234956                                                                                                                │ running-upgrade-234956    │ jenkins │ v1.37.0 │ 22 Nov 25 00:48 UTC │ 22 Nov 25 00:48 UTC │
	│ start   │ -p pause-028559 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-028559              │ jenkins │ v1.37.0 │ 22 Nov 25 00:48 UTC │ 22 Nov 25 00:49 UTC │
	│ start   │ -p pause-028559 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-028559              │ jenkins │ v1.37.0 │ 22 Nov 25 00:49 UTC │ 22 Nov 25 00:49 UTC │
	│ pause   │ -p pause-028559 --alsologtostderr -v=5                                                                                                   │ pause-028559              │ jenkins │ v1.37.0 │ 22 Nov 25 00:49 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 00:49:12
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1122 00:49:12.742265  675703 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:49:12.742449  675703 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:49:12.742459  675703 out.go:374] Setting ErrFile to fd 2...
	I1122 00:49:12.742465  675703 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:49:12.742721  675703 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1122 00:49:12.743088  675703 out.go:368] Setting JSON to false
	I1122 00:49:12.744402  675703 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":19869,"bootTime":1763752684,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1122 00:49:12.744473  675703 start.go:143] virtualization:  
	I1122 00:49:12.748296  675703 out.go:179] * [pause-028559] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1122 00:49:12.752033  675703 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:49:12.752135  675703 notify.go:221] Checking for updates...
	I1122 00:49:12.757649  675703 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:49:12.760587  675703 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1122 00:49:12.763496  675703 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-513600/.minikube
	I1122 00:49:12.766366  675703 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1122 00:49:12.769443  675703 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:49:12.772761  675703 config.go:182] Loaded profile config "pause-028559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:49:12.773310  675703 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:49:12.800610  675703 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1122 00:49:12.800719  675703 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:49:12.861541  675703 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-22 00:49:12.85166246 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:49:12.861653  675703 docker.go:319] overlay module found
	I1122 00:49:12.866766  675703 out.go:179] * Using the docker driver based on existing profile
	I1122 00:49:12.869525  675703 start.go:309] selected driver: docker
	I1122 00:49:12.869544  675703 start.go:930] validating driver "docker" against &{Name:pause-028559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-028559 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:49:12.869670  675703 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:49:12.869768  675703 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:49:12.934723  675703 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-22 00:49:12.926156642 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:49:12.935122  675703 cni.go:84] Creating CNI manager for ""
	I1122 00:49:12.935187  675703 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:49:12.935229  675703 start.go:353] cluster config:
	{Name:pause-028559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-028559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:49:12.940261  675703 out.go:179] * Starting "pause-028559" primary control-plane node in "pause-028559" cluster
	I1122 00:49:12.943231  675703 cache.go:134] Beginning downloading kic base image for docker with crio
	I1122 00:49:12.946084  675703 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:49:12.949098  675703 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:49:12.949150  675703 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1122 00:49:12.949160  675703 cache.go:65] Caching tarball of preloaded images
	I1122 00:49:12.949177  675703 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:49:12.949239  675703 preload.go:238] Found /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1122 00:49:12.949249  675703 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1122 00:49:12.949385  675703 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/pause-028559/config.json ...
	I1122 00:49:12.969168  675703 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:49:12.969190  675703 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:49:12.969208  675703 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:49:12.969230  675703 start.go:360] acquireMachinesLock for pause-028559: {Name:mk639f010a9c552843e3e85aa47fa9daf6e9b9cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:49:12.969291  675703 start.go:364] duration metric: took 34.805µs to acquireMachinesLock for "pause-028559"
	I1122 00:49:12.969315  675703 start.go:96] Skipping create...Using existing machine configuration
	I1122 00:49:12.969320  675703 fix.go:54] fixHost starting: 
	I1122 00:49:12.969579  675703 cli_runner.go:164] Run: docker container inspect pause-028559 --format={{.State.Status}}
	I1122 00:49:12.988057  675703 fix.go:112] recreateIfNeeded on pause-028559: state=Running err=<nil>
	W1122 00:49:12.988082  675703 fix.go:138] unexpected machine state, will restart: <nil>
	I1122 00:49:15.839588  659783 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1122 00:49:15.839649  659783 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:49:15.839721  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:49:15.866942  659783 cri.go:89] found id: "513641c31371d8c05f0f09cedcd3f1444e01013c0f90e22c86ddeadf4d1a516c"
	I1122 00:49:15.866964  659783 cri.go:89] found id: "b12168475560a7e49fcbb662aed1e0002ea914f70e776f082083dc3a3c822135"
	I1122 00:49:15.866968  659783 cri.go:89] found id: ""
	I1122 00:49:15.866976  659783 logs.go:282] 2 containers: [513641c31371d8c05f0f09cedcd3f1444e01013c0f90e22c86ddeadf4d1a516c b12168475560a7e49fcbb662aed1e0002ea914f70e776f082083dc3a3c822135]
	I1122 00:49:15.867031  659783 ssh_runner.go:195] Run: which crictl
	I1122 00:49:15.870652  659783 ssh_runner.go:195] Run: which crictl
	I1122 00:49:15.874915  659783 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1122 00:49:15.874985  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:49:15.901174  659783 cri.go:89] found id: ""
	I1122 00:49:15.901206  659783 logs.go:282] 0 containers: []
	W1122 00:49:15.901215  659783 logs.go:284] No container was found matching "etcd"
	I1122 00:49:15.901221  659783 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1122 00:49:15.901297  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:49:15.927620  659783 cri.go:89] found id: ""
	I1122 00:49:15.927641  659783 logs.go:282] 0 containers: []
	W1122 00:49:15.927650  659783 logs.go:284] No container was found matching "coredns"
	I1122 00:49:15.927656  659783 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:49:15.927712  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:49:15.965325  659783 cri.go:89] found id: "c546164b4e284af119113a13e0a67e01c0dd3fe9b208a4092c79664dd912c8e8"
	I1122 00:49:15.965349  659783 cri.go:89] found id: ""
	I1122 00:49:15.965357  659783 logs.go:282] 1 containers: [c546164b4e284af119113a13e0a67e01c0dd3fe9b208a4092c79664dd912c8e8]
	I1122 00:49:15.965415  659783 ssh_runner.go:195] Run: which crictl
	I1122 00:49:15.970360  659783 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:49:15.970435  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:49:16.006441  659783 cri.go:89] found id: ""
	I1122 00:49:16.006470  659783 logs.go:282] 0 containers: []
	W1122 00:49:16.006480  659783 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:49:16.006488  659783 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:49:16.006555  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:49:16.035453  659783 cri.go:89] found id: "492c7cccdf0364fe71128220d1afdd81ad6e587b857b7898958d72c0a7eb4953"
	I1122 00:49:16.035476  659783 cri.go:89] found id: "bb0e61234ebc57d4a32db1e400b0a3cab257c463daf8c1fd63a71be5978bb7ad"
	I1122 00:49:16.035481  659783 cri.go:89] found id: ""
	I1122 00:49:16.035490  659783 logs.go:282] 2 containers: [492c7cccdf0364fe71128220d1afdd81ad6e587b857b7898958d72c0a7eb4953 bb0e61234ebc57d4a32db1e400b0a3cab257c463daf8c1fd63a71be5978bb7ad]
	I1122 00:49:16.035549  659783 ssh_runner.go:195] Run: which crictl
	I1122 00:49:16.039706  659783 ssh_runner.go:195] Run: which crictl
	I1122 00:49:16.043883  659783 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1122 00:49:16.043962  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:49:16.076539  659783 cri.go:89] found id: ""
	I1122 00:49:16.076566  659783 logs.go:282] 0 containers: []
	W1122 00:49:16.076575  659783 logs.go:284] No container was found matching "kindnet"
	I1122 00:49:16.076582  659783 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:49:16.076644  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:49:16.104616  659783 cri.go:89] found id: ""
	I1122 00:49:16.104642  659783 logs.go:282] 0 containers: []
	W1122 00:49:16.104651  659783 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:49:16.104697  659783 logs.go:123] Gathering logs for kube-controller-manager [492c7cccdf0364fe71128220d1afdd81ad6e587b857b7898958d72c0a7eb4953] ...
	I1122 00:49:16.104717  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 492c7cccdf0364fe71128220d1afdd81ad6e587b857b7898958d72c0a7eb4953"
	I1122 00:49:16.132023  659783 logs.go:123] Gathering logs for kube-controller-manager [bb0e61234ebc57d4a32db1e400b0a3cab257c463daf8c1fd63a71be5978bb7ad] ...
	I1122 00:49:16.132050  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb0e61234ebc57d4a32db1e400b0a3cab257c463daf8c1fd63a71be5978bb7ad"
	I1122 00:49:16.157979  659783 logs.go:123] Gathering logs for CRI-O ...
	I1122 00:49:16.158005  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1122 00:49:16.215931  659783 logs.go:123] Gathering logs for container status ...
	I1122 00:49:16.215968  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:49:16.246098  659783 logs.go:123] Gathering logs for dmesg ...
	I1122 00:49:16.246128  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:49:12.991600  675703 out.go:252] * Updating the running docker "pause-028559" container ...
	I1122 00:49:12.991639  675703 machine.go:94] provisionDockerMachine start ...
	I1122 00:49:12.991725  675703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-028559
	I1122 00:49:13.010871  675703 main.go:143] libmachine: Using SSH client type: native
	I1122 00:49:13.011204  675703 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33745 <nil> <nil>}
	I1122 00:49:13.011221  675703 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:49:13.149372  675703 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-028559
	
	I1122 00:49:13.149398  675703 ubuntu.go:182] provisioning hostname "pause-028559"
	I1122 00:49:13.149460  675703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-028559
	I1122 00:49:13.167795  675703 main.go:143] libmachine: Using SSH client type: native
	I1122 00:49:13.168144  675703 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33745 <nil> <nil>}
	I1122 00:49:13.168165  675703 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-028559 && echo "pause-028559" | sudo tee /etc/hostname
	I1122 00:49:13.328740  675703 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-028559
	
	I1122 00:49:13.328814  675703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-028559
	I1122 00:49:13.348051  675703 main.go:143] libmachine: Using SSH client type: native
	I1122 00:49:13.348487  675703 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33745 <nil> <nil>}
	I1122 00:49:13.348505  675703 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-028559' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-028559/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-028559' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:49:13.494473  675703 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1122 00:49:13.494501  675703 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-513600/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-513600/.minikube}
	I1122 00:49:13.494521  675703 ubuntu.go:190] setting up certificates
	I1122 00:49:13.494530  675703 provision.go:84] configureAuth start
	I1122 00:49:13.494587  675703 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-028559
	I1122 00:49:13.512933  675703 provision.go:143] copyHostCerts
	I1122 00:49:13.513003  675703 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem, removing ...
	I1122 00:49:13.513023  675703 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem
	I1122 00:49:13.513095  675703 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem (1675 bytes)
	I1122 00:49:13.513193  675703 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem, removing ...
	I1122 00:49:13.513204  675703 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem
	I1122 00:49:13.513232  675703 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem (1078 bytes)
	I1122 00:49:13.513286  675703 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem, removing ...
	I1122 00:49:13.513296  675703 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem
	I1122 00:49:13.513320  675703 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem (1123 bytes)
	I1122 00:49:13.513366  675703 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem org=jenkins.pause-028559 san=[127.0.0.1 192.168.85.2 localhost minikube pause-028559]
	I1122 00:49:14.027133  675703 provision.go:177] copyRemoteCerts
	I1122 00:49:14.027237  675703 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:49:14.027321  675703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-028559
	I1122 00:49:14.045612  675703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33745 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/pause-028559/id_rsa Username:docker}
	I1122 00:49:14.149496  675703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:49:14.167820  675703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1122 00:49:14.185598  675703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1122 00:49:14.205471  675703 provision.go:87] duration metric: took 710.919232ms to configureAuth
	I1122 00:49:14.205500  675703 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:49:14.205724  675703 config.go:182] Loaded profile config "pause-028559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:49:14.205855  675703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-028559
	I1122 00:49:14.222966  675703 main.go:143] libmachine: Using SSH client type: native
	I1122 00:49:14.223286  675703 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33745 <nil> <nil>}
	I1122 00:49:14.223308  675703 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1122 00:49:16.265015  659783 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:49:16.265044  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1122 00:49:20.687879  659783 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (4.422814344s)
	W1122 00:49:20.687910  659783 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:35182->[::1]:8443: read: connection reset by peer
	 output: 
	** stderr ** 
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:35182->[::1]:8443: read: connection reset by peer
	
	** /stderr **
	I1122 00:49:20.687918  659783 logs.go:123] Gathering logs for kube-scheduler [c546164b4e284af119113a13e0a67e01c0dd3fe9b208a4092c79664dd912c8e8] ...
	I1122 00:49:20.687929  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c546164b4e284af119113a13e0a67e01c0dd3fe9b208a4092c79664dd912c8e8"
	I1122 00:49:20.768382  659783 logs.go:123] Gathering logs for kubelet ...
	I1122 00:49:20.768465  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:49:20.900070  659783 logs.go:123] Gathering logs for kube-apiserver [513641c31371d8c05f0f09cedcd3f1444e01013c0f90e22c86ddeadf4d1a516c] ...
	I1122 00:49:20.900104  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 513641c31371d8c05f0f09cedcd3f1444e01013c0f90e22c86ddeadf4d1a516c"
	I1122 00:49:20.947744  659783 logs.go:123] Gathering logs for kube-apiserver [b12168475560a7e49fcbb662aed1e0002ea914f70e776f082083dc3a3c822135] ...
	I1122 00:49:20.947780  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b12168475560a7e49fcbb662aed1e0002ea914f70e776f082083dc3a3c822135"
	W1122 00:49:20.978746  659783 logs.go:130] failed kube-apiserver [b12168475560a7e49fcbb662aed1e0002ea914f70e776f082083dc3a3c822135]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b12168475560a7e49fcbb662aed1e0002ea914f70e776f082083dc3a3c822135" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b12168475560a7e49fcbb662aed1e0002ea914f70e776f082083dc3a3c822135": Process exited with status 1
	stdout:
	
	stderr:
	E1122 00:49:20.976033    3907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b12168475560a7e49fcbb662aed1e0002ea914f70e776f082083dc3a3c822135\": container with ID starting with b12168475560a7e49fcbb662aed1e0002ea914f70e776f082083dc3a3c822135 not found: ID does not exist" containerID="b12168475560a7e49fcbb662aed1e0002ea914f70e776f082083dc3a3c822135"
	time="2025-11-22T00:49:20Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"b12168475560a7e49fcbb662aed1e0002ea914f70e776f082083dc3a3c822135\": container with ID starting with b12168475560a7e49fcbb662aed1e0002ea914f70e776f082083dc3a3c822135 not found: ID does not exist"
	 output: 
	** stderr ** 
	E1122 00:49:20.976033    3907 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b12168475560a7e49fcbb662aed1e0002ea914f70e776f082083dc3a3c822135\": container with ID starting with b12168475560a7e49fcbb662aed1e0002ea914f70e776f082083dc3a3c822135 not found: ID does not exist" containerID="b12168475560a7e49fcbb662aed1e0002ea914f70e776f082083dc3a3c822135"
	time="2025-11-22T00:49:20Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"b12168475560a7e49fcbb662aed1e0002ea914f70e776f082083dc3a3c822135\": container with ID starting with b12168475560a7e49fcbb662aed1e0002ea914f70e776f082083dc3a3c822135 not found: ID does not exist"
	
	** /stderr **
	I1122 00:49:19.583615  675703 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1122 00:49:19.583639  675703 machine.go:97] duration metric: took 6.591990734s to provisionDockerMachine
	I1122 00:49:19.583651  675703 start.go:293] postStartSetup for "pause-028559" (driver="docker")
	I1122 00:49:19.583662  675703 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:49:19.583720  675703 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:49:19.583760  675703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-028559
	I1122 00:49:19.602527  675703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33745 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/pause-028559/id_rsa Username:docker}
	I1122 00:49:19.705588  675703 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:49:19.708796  675703 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:49:19.708822  675703 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:49:19.708850  675703 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/addons for local assets ...
	I1122 00:49:19.708911  675703 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/files for local assets ...
	I1122 00:49:19.709035  675703 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem -> 5169372.pem in /etc/ssl/certs
	I1122 00:49:19.709137  675703 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:49:19.716285  675703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /etc/ssl/certs/5169372.pem (1708 bytes)
	I1122 00:49:19.733029  675703 start.go:296] duration metric: took 149.362549ms for postStartSetup
	I1122 00:49:19.733114  675703 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:49:19.733173  675703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-028559
	I1122 00:49:19.750385  675703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33745 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/pause-028559/id_rsa Username:docker}
	I1122 00:49:19.847033  675703 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:49:19.851815  675703 fix.go:56] duration metric: took 6.882488052s for fixHost
	I1122 00:49:19.851840  675703 start.go:83] releasing machines lock for "pause-028559", held for 6.882537069s
	I1122 00:49:19.851920  675703 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-028559
	I1122 00:49:19.868078  675703 ssh_runner.go:195] Run: cat /version.json
	I1122 00:49:19.868140  675703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-028559
	I1122 00:49:19.868151  675703 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:49:19.868217  675703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-028559
	I1122 00:49:19.886355  675703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33745 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/pause-028559/id_rsa Username:docker}
	I1122 00:49:19.887694  675703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33745 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/pause-028559/id_rsa Username:docker}
	I1122 00:49:19.985564  675703 ssh_runner.go:195] Run: systemctl --version
	I1122 00:49:20.079432  675703 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1122 00:49:20.122304  675703 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:49:20.126849  675703 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:49:20.126918  675703 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:49:20.135486  675703 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1122 00:49:20.135510  675703 start.go:496] detecting cgroup driver to use...
	I1122 00:49:20.135541  675703 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1122 00:49:20.135602  675703 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1122 00:49:20.150982  675703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1122 00:49:20.164642  675703 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:49:20.164707  675703 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:49:20.181760  675703 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:49:20.195579  675703 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:49:20.337744  675703 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:49:20.485361  675703 docker.go:234] disabling docker service ...
	I1122 00:49:20.485423  675703 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:49:20.502268  675703 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:49:20.515993  675703 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:49:20.658658  675703 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:49:20.858345  675703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:49:20.874536  675703 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:49:20.899875  675703 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1122 00:49:20.899946  675703 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:49:20.909845  675703 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1122 00:49:20.909912  675703 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:49:20.923518  675703 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:49:20.933988  675703 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:49:20.949689  675703 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:49:20.959059  675703 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:49:20.972008  675703 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:49:20.981590  675703 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:49:20.990490  675703 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:49:20.999363  675703 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:49:21.009130  675703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:49:21.157828  675703 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1122 00:49:21.366060  675703 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1122 00:49:21.366129  675703 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1122 00:49:21.370129  675703 start.go:564] Will wait 60s for crictl version
	I1122 00:49:21.370191  675703 ssh_runner.go:195] Run: which crictl
	I1122 00:49:21.373639  675703 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:49:21.398612  675703 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1122 00:49:21.398697  675703 ssh_runner.go:195] Run: crio --version
	I1122 00:49:21.425256  675703 ssh_runner.go:195] Run: crio --version
	I1122 00:49:21.455792  675703 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1122 00:49:21.458723  675703 cli_runner.go:164] Run: docker network inspect pause-028559 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:49:21.474025  675703 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1122 00:49:21.477984  675703 kubeadm.go:884] updating cluster {Name:pause-028559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-028559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1122 00:49:21.478137  675703 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:49:21.478191  675703 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:49:21.510004  675703 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:49:21.510031  675703 crio.go:433] Images already preloaded, skipping extraction
	I1122 00:49:21.510095  675703 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:49:21.535600  675703 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:49:21.535620  675703 cache_images.go:86] Images are preloaded, skipping loading
	I1122 00:49:21.535628  675703 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1122 00:49:21.535731  675703 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-028559 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-028559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1122 00:49:21.535809  675703 ssh_runner.go:195] Run: crio config
	I1122 00:49:21.587638  675703 cni.go:84] Creating CNI manager for ""
	I1122 00:49:21.587661  675703 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:49:21.587679  675703 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1122 00:49:21.587726  675703 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-028559 NodeName:pause-028559 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 00:49:21.587911  675703 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-028559"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1122 00:49:21.587999  675703 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 00:49:21.595942  675703 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:49:21.596046  675703 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1122 00:49:21.603592  675703 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1122 00:49:21.617454  675703 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:49:21.629618  675703 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1122 00:49:21.644361  675703 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1122 00:49:21.648034  675703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:49:21.792492  675703 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:49:21.807347  675703 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/pause-028559 for IP: 192.168.85.2
	I1122 00:49:21.807428  675703 certs.go:195] generating shared ca certs ...
	I1122 00:49:21.807460  675703 certs.go:227] acquiring lock for ca certs: {Name:mkaf4c79493334cb2058b349c36be7473837f9b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:49:21.807632  675703 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key
	I1122 00:49:21.807707  675703 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key
	I1122 00:49:21.807744  675703 certs.go:257] generating profile certs ...
	I1122 00:49:21.807882  675703 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/pause-028559/client.key
	I1122 00:49:21.807986  675703 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/pause-028559/apiserver.key.d413a2b7
	I1122 00:49:21.808061  675703 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/pause-028559/proxy-client.key
	I1122 00:49:21.808205  675703 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem (1338 bytes)
	W1122 00:49:21.808291  675703 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937_empty.pem, impossibly tiny 0 bytes
	I1122 00:49:21.808320  675703 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:49:21.808367  675703 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:49:21.808420  675703 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:49:21.808472  675703 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem (1675 bytes)
	I1122 00:49:21.808563  675703 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem (1708 bytes)
	I1122 00:49:21.809188  675703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:49:21.828740  675703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1122 00:49:21.846083  675703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:49:21.863576  675703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:49:21.880831  675703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/pause-028559/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1122 00:49:21.897968  675703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/pause-028559/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1122 00:49:21.916136  675703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/pause-028559/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:49:21.933505  675703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/pause-028559/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1122 00:49:21.950594  675703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /usr/share/ca-certificates/5169372.pem (1708 bytes)
	I1122 00:49:21.967967  675703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:49:21.985075  675703 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem --> /usr/share/ca-certificates/516937.pem (1338 bytes)
	I1122 00:49:22.009505  675703 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 00:49:22.023923  675703 ssh_runner.go:195] Run: openssl version
	I1122 00:49:22.030361  675703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5169372.pem && ln -fs /usr/share/ca-certificates/5169372.pem /etc/ssl/certs/5169372.pem"
	I1122 00:49:22.039238  675703 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5169372.pem
	I1122 00:49:22.043542  675703 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:55 /usr/share/ca-certificates/5169372.pem
	I1122 00:49:22.043610  675703 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5169372.pem
	I1122 00:49:22.089660  675703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5169372.pem /etc/ssl/certs/3ec20f2e.0"
	I1122 00:49:22.097984  675703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:49:22.106565  675703 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:49:22.110214  675703 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:49:22.110303  675703 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:49:22.152326  675703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1122 00:49:22.160182  675703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516937.pem && ln -fs /usr/share/ca-certificates/516937.pem /etc/ssl/certs/516937.pem"
	I1122 00:49:22.168398  675703 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516937.pem
	I1122 00:49:22.172129  675703 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:55 /usr/share/ca-certificates/516937.pem
	I1122 00:49:22.172242  675703 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516937.pem
	I1122 00:49:22.213186  675703 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516937.pem /etc/ssl/certs/51391683.0"
	I1122 00:49:22.221286  675703 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:49:22.225048  675703 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1122 00:49:22.266039  675703 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1122 00:49:22.307350  675703 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1122 00:49:22.348238  675703 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1122 00:49:22.389177  675703 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1122 00:49:22.430093  675703 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1122 00:49:22.471760  675703 kubeadm.go:401] StartCluster: {Name:pause-028559 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-028559 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:49:22.471883  675703 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1122 00:49:22.471950  675703 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:49:22.499207  675703 cri.go:89] found id: "e8a18ac29ae5ac05f6b5b4c70bcf6b1fc73a710e59136307cfa68c8bcd36557d"
	I1122 00:49:22.499229  675703 cri.go:89] found id: "7cf54964fa6209d21f425b45edbf33a457dcdc58ce72370d8035cad09e292b10"
	I1122 00:49:22.499233  675703 cri.go:89] found id: "8a47d7c0ee952b2d53a1e55f636df9b8ea9e35a1de95e8cd16ba1ee91d2429e5"
	I1122 00:49:22.499237  675703 cri.go:89] found id: "36494fae0c15c7ac23088851e0409e2f96cb7f3066877902ebe7aedf80916b67"
	I1122 00:49:22.499240  675703 cri.go:89] found id: "b36b71426eeddbbd8a66fee6ba6d51873fa9668612addcc8f00a16c6fdb775fd"
	I1122 00:49:22.499243  675703 cri.go:89] found id: "fe27dafacf48d07af4ed5cb9690723267dc16f0e9bc5356896a5e1d595009ff0"
	I1122 00:49:22.499247  675703 cri.go:89] found id: "c1bb2c7a299bbbaca1218d7345b106ea8559d1df49b48d1e8effc89a6e7a38b3"
	I1122 00:49:22.499250  675703 cri.go:89] found id: ""
	I1122 00:49:22.499328  675703 ssh_runner.go:195] Run: sudo runc list -f json
	W1122 00:49:22.510348  675703 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:49:22Z" level=error msg="open /run/runc: no such file or directory"
	I1122 00:49:22.510452  675703 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 00:49:22.518456  675703 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1122 00:49:22.518476  675703 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1122 00:49:22.518527  675703 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1122 00:49:22.525863  675703 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1122 00:49:22.526504  675703 kubeconfig.go:125] found "pause-028559" server: "https://192.168.85.2:8443"
	I1122 00:49:22.527325  675703 kapi.go:59] client config for pause-028559: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21934-513600/.minikube/profiles/pause-028559/client.crt", KeyFile:"/home/jenkins/minikube-integration/21934-513600/.minikube/profiles/pause-028559/client.key", CAFile:"/home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1122 00:49:22.527861  675703 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1122 00:49:22.527879  675703 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1122 00:49:22.527886  675703 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1122 00:49:22.527892  675703 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1122 00:49:22.527899  675703 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1122 00:49:22.528166  675703 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1122 00:49:22.535965  675703 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1122 00:49:22.535999  675703 kubeadm.go:602] duration metric: took 17.516524ms to restartPrimaryControlPlane
	I1122 00:49:22.536009  675703 kubeadm.go:403] duration metric: took 64.261728ms to StartCluster
	I1122 00:49:22.536024  675703 settings.go:142] acquiring lock: {Name:mk6c31eb57ec65b047b78b4e1046e03fe7cc77bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:49:22.536082  675703 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1122 00:49:22.536975  675703 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/kubeconfig: {Name:mkcd094d8a765e5a81369c0fb33cb5e696b17bb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:49:22.537195  675703 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:49:22.537586  675703 config.go:182] Loaded profile config "pause-028559": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:49:22.537637  675703 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1122 00:49:22.542171  675703 out.go:179] * Verifying Kubernetes components...
	I1122 00:49:22.544048  675703 out.go:179] * Enabled addons: 
	I1122 00:49:22.546908  675703 addons.go:530] duration metric: took 9.271441ms for enable addons: enabled=[]
	I1122 00:49:22.546954  675703 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:49:22.701557  675703 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:49:22.714884  675703 node_ready.go:35] waiting up to 6m0s for node "pause-028559" to be "Ready" ...
	I1122 00:49:23.479039  659783 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:49:23.479428  659783 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1122 00:49:23.479469  659783 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:49:23.479520  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:49:23.517661  659783 cri.go:89] found id: "513641c31371d8c05f0f09cedcd3f1444e01013c0f90e22c86ddeadf4d1a516c"
	I1122 00:49:23.517680  659783 cri.go:89] found id: ""
	I1122 00:49:23.517687  659783 logs.go:282] 1 containers: [513641c31371d8c05f0f09cedcd3f1444e01013c0f90e22c86ddeadf4d1a516c]
	I1122 00:49:23.517741  659783 ssh_runner.go:195] Run: which crictl
	I1122 00:49:23.527344  659783 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1122 00:49:23.527428  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:49:23.580193  659783 cri.go:89] found id: ""
	I1122 00:49:23.580214  659783 logs.go:282] 0 containers: []
	W1122 00:49:23.580222  659783 logs.go:284] No container was found matching "etcd"
	I1122 00:49:23.580228  659783 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1122 00:49:23.580285  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:49:23.661303  659783 cri.go:89] found id: ""
	I1122 00:49:23.661324  659783 logs.go:282] 0 containers: []
	W1122 00:49:23.661332  659783 logs.go:284] No container was found matching "coredns"
	I1122 00:49:23.661339  659783 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:49:23.661395  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:49:23.709061  659783 cri.go:89] found id: "c546164b4e284af119113a13e0a67e01c0dd3fe9b208a4092c79664dd912c8e8"
	I1122 00:49:23.709079  659783 cri.go:89] found id: ""
	I1122 00:49:23.709086  659783 logs.go:282] 1 containers: [c546164b4e284af119113a13e0a67e01c0dd3fe9b208a4092c79664dd912c8e8]
	I1122 00:49:23.709146  659783 ssh_runner.go:195] Run: which crictl
	I1122 00:49:23.717708  659783 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:49:23.717776  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:49:23.760550  659783 cri.go:89] found id: ""
	I1122 00:49:23.760571  659783 logs.go:282] 0 containers: []
	W1122 00:49:23.760579  659783 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:49:23.760586  659783 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:49:23.760643  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:49:23.808446  659783 cri.go:89] found id: "492c7cccdf0364fe71128220d1afdd81ad6e587b857b7898958d72c0a7eb4953"
	I1122 00:49:23.808519  659783 cri.go:89] found id: "bb0e61234ebc57d4a32db1e400b0a3cab257c463daf8c1fd63a71be5978bb7ad"
	I1122 00:49:23.808539  659783 cri.go:89] found id: ""
	I1122 00:49:23.808563  659783 logs.go:282] 2 containers: [492c7cccdf0364fe71128220d1afdd81ad6e587b857b7898958d72c0a7eb4953 bb0e61234ebc57d4a32db1e400b0a3cab257c463daf8c1fd63a71be5978bb7ad]
	I1122 00:49:23.808649  659783 ssh_runner.go:195] Run: which crictl
	I1122 00:49:23.812612  659783 ssh_runner.go:195] Run: which crictl
	I1122 00:49:23.822331  659783 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1122 00:49:23.822451  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:49:23.874912  659783 cri.go:89] found id: ""
	I1122 00:49:23.874986  659783 logs.go:282] 0 containers: []
	W1122 00:49:23.875008  659783 logs.go:284] No container was found matching "kindnet"
	I1122 00:49:23.875026  659783 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:49:23.875115  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:49:23.922229  659783 cri.go:89] found id: ""
	I1122 00:49:23.922303  659783 logs.go:282] 0 containers: []
	W1122 00:49:23.922327  659783 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:49:23.922370  659783 logs.go:123] Gathering logs for kubelet ...
	I1122 00:49:23.922399  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:49:24.095003  659783 logs.go:123] Gathering logs for dmesg ...
	I1122 00:49:24.095043  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:49:24.114309  659783 logs.go:123] Gathering logs for kube-controller-manager [492c7cccdf0364fe71128220d1afdd81ad6e587b857b7898958d72c0a7eb4953] ...
	I1122 00:49:24.114342  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 492c7cccdf0364fe71128220d1afdd81ad6e587b857b7898958d72c0a7eb4953"
	I1122 00:49:24.151737  659783 logs.go:123] Gathering logs for CRI-O ...
	I1122 00:49:24.151764  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1122 00:49:24.256697  659783 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:49:24.256775  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:49:24.387958  659783 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:49:24.387976  659783 logs.go:123] Gathering logs for kube-apiserver [513641c31371d8c05f0f09cedcd3f1444e01013c0f90e22c86ddeadf4d1a516c] ...
	I1122 00:49:24.387991  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 513641c31371d8c05f0f09cedcd3f1444e01013c0f90e22c86ddeadf4d1a516c"
	I1122 00:49:24.427807  659783 logs.go:123] Gathering logs for kube-scheduler [c546164b4e284af119113a13e0a67e01c0dd3fe9b208a4092c79664dd912c8e8] ...
	I1122 00:49:24.427883  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c546164b4e284af119113a13e0a67e01c0dd3fe9b208a4092c79664dd912c8e8"
	I1122 00:49:24.509719  659783 logs.go:123] Gathering logs for kube-controller-manager [bb0e61234ebc57d4a32db1e400b0a3cab257c463daf8c1fd63a71be5978bb7ad] ...
	I1122 00:49:24.509878  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb0e61234ebc57d4a32db1e400b0a3cab257c463daf8c1fd63a71be5978bb7ad"
	I1122 00:49:24.589091  659783 logs.go:123] Gathering logs for container status ...
	I1122 00:49:24.589117  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:49:27.174752  659783 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:49:27.175242  659783 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1122 00:49:27.175327  659783 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:49:27.175444  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:49:27.217336  659783 cri.go:89] found id: "513641c31371d8c05f0f09cedcd3f1444e01013c0f90e22c86ddeadf4d1a516c"
	I1122 00:49:27.217395  659783 cri.go:89] found id: ""
	I1122 00:49:27.217426  659783 logs.go:282] 1 containers: [513641c31371d8c05f0f09cedcd3f1444e01013c0f90e22c86ddeadf4d1a516c]
	I1122 00:49:27.217510  659783 ssh_runner.go:195] Run: which crictl
	I1122 00:49:27.230837  659783 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1122 00:49:27.230962  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:49:27.279157  659783 cri.go:89] found id: ""
	I1122 00:49:27.279231  659783 logs.go:282] 0 containers: []
	W1122 00:49:27.279253  659783 logs.go:284] No container was found matching "etcd"
	I1122 00:49:27.279273  659783 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1122 00:49:27.279379  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:49:27.344590  659783 cri.go:89] found id: ""
	I1122 00:49:27.344664  659783 logs.go:282] 0 containers: []
	W1122 00:49:27.344686  659783 logs.go:284] No container was found matching "coredns"
	I1122 00:49:27.344705  659783 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:49:27.344792  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:49:27.383127  659783 cri.go:89] found id: "c546164b4e284af119113a13e0a67e01c0dd3fe9b208a4092c79664dd912c8e8"
	I1122 00:49:27.383199  659783 cri.go:89] found id: ""
	I1122 00:49:27.383222  659783 logs.go:282] 1 containers: [c546164b4e284af119113a13e0a67e01c0dd3fe9b208a4092c79664dd912c8e8]
	I1122 00:49:27.383314  659783 ssh_runner.go:195] Run: which crictl
	I1122 00:49:27.389051  659783 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:49:27.389184  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:49:27.436305  659783 cri.go:89] found id: ""
	I1122 00:49:27.436384  659783 logs.go:282] 0 containers: []
	W1122 00:49:27.436408  659783 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:49:27.436426  659783 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:49:27.436531  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:49:27.508007  659783 cri.go:89] found id: "492c7cccdf0364fe71128220d1afdd81ad6e587b857b7898958d72c0a7eb4953"
	I1122 00:49:27.508080  659783 cri.go:89] found id: "bb0e61234ebc57d4a32db1e400b0a3cab257c463daf8c1fd63a71be5978bb7ad"
	I1122 00:49:27.508116  659783 cri.go:89] found id: ""
	I1122 00:49:27.508140  659783 logs.go:282] 2 containers: [492c7cccdf0364fe71128220d1afdd81ad6e587b857b7898958d72c0a7eb4953 bb0e61234ebc57d4a32db1e400b0a3cab257c463daf8c1fd63a71be5978bb7ad]
	I1122 00:49:27.508228  659783 ssh_runner.go:195] Run: which crictl
	I1122 00:49:27.512426  659783 ssh_runner.go:195] Run: which crictl
	I1122 00:49:27.522456  659783 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1122 00:49:27.522588  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:49:27.563609  659783 cri.go:89] found id: ""
	I1122 00:49:27.563692  659783 logs.go:282] 0 containers: []
	W1122 00:49:27.563714  659783 logs.go:284] No container was found matching "kindnet"
	I1122 00:49:27.563734  659783 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:49:27.563841  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:49:27.620944  659783 cri.go:89] found id: ""
	I1122 00:49:27.621017  659783 logs.go:282] 0 containers: []
	W1122 00:49:27.621039  659783 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:49:27.621081  659783 logs.go:123] Gathering logs for kubelet ...
	I1122 00:49:27.621108  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:49:27.787270  659783 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:49:27.787301  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:49:27.904766  659783 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:49:27.904785  659783 logs.go:123] Gathering logs for kube-apiserver [513641c31371d8c05f0f09cedcd3f1444e01013c0f90e22c86ddeadf4d1a516c] ...
	I1122 00:49:27.904798  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 513641c31371d8c05f0f09cedcd3f1444e01013c0f90e22c86ddeadf4d1a516c"
	I1122 00:49:27.954024  659783 logs.go:123] Gathering logs for kube-scheduler [c546164b4e284af119113a13e0a67e01c0dd3fe9b208a4092c79664dd912c8e8] ...
	I1122 00:49:27.954108  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c546164b4e284af119113a13e0a67e01c0dd3fe9b208a4092c79664dd912c8e8"
	I1122 00:49:28.057382  659783 logs.go:123] Gathering logs for kube-controller-manager [bb0e61234ebc57d4a32db1e400b0a3cab257c463daf8c1fd63a71be5978bb7ad] ...
	I1122 00:49:28.057480  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bb0e61234ebc57d4a32db1e400b0a3cab257c463daf8c1fd63a71be5978bb7ad"
	I1122 00:49:28.111713  659783 logs.go:123] Gathering logs for CRI-O ...
	I1122 00:49:28.111737  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1122 00:49:28.180238  659783 logs.go:123] Gathering logs for container status ...
	I1122 00:49:28.180314  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:49:28.228480  659783 logs.go:123] Gathering logs for dmesg ...
	I1122 00:49:28.228504  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:49:28.248652  659783 logs.go:123] Gathering logs for kube-controller-manager [492c7cccdf0364fe71128220d1afdd81ad6e587b857b7898958d72c0a7eb4953] ...
	I1122 00:49:28.248722  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 492c7cccdf0364fe71128220d1afdd81ad6e587b857b7898958d72c0a7eb4953"
	I1122 00:49:30.796894  659783 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:49:30.797406  659783 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1122 00:49:30.797458  659783 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:49:30.797520  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:49:30.824603  659783 cri.go:89] found id: "513641c31371d8c05f0f09cedcd3f1444e01013c0f90e22c86ddeadf4d1a516c"
	I1122 00:49:30.824626  659783 cri.go:89] found id: ""
	I1122 00:49:30.824633  659783 logs.go:282] 1 containers: [513641c31371d8c05f0f09cedcd3f1444e01013c0f90e22c86ddeadf4d1a516c]
	I1122 00:49:30.824697  659783 ssh_runner.go:195] Run: which crictl
	I1122 00:49:30.828378  659783 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1122 00:49:30.828448  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:49:30.860932  659783 cri.go:89] found id: ""
	I1122 00:49:30.860956  659783 logs.go:282] 0 containers: []
	W1122 00:49:30.860965  659783 logs.go:284] No container was found matching "etcd"
	I1122 00:49:30.860971  659783 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1122 00:49:30.861028  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:49:30.889540  659783 cri.go:89] found id: ""
	I1122 00:49:30.889562  659783 logs.go:282] 0 containers: []
	W1122 00:49:30.889572  659783 logs.go:284] No container was found matching "coredns"
	I1122 00:49:30.889578  659783 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:49:30.889638  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:49:30.920066  659783 cri.go:89] found id: "c546164b4e284af119113a13e0a67e01c0dd3fe9b208a4092c79664dd912c8e8"
	I1122 00:49:30.920095  659783 cri.go:89] found id: ""
	I1122 00:49:30.920104  659783 logs.go:282] 1 containers: [c546164b4e284af119113a13e0a67e01c0dd3fe9b208a4092c79664dd912c8e8]
	I1122 00:49:30.920162  659783 ssh_runner.go:195] Run: which crictl
	I1122 00:49:30.924202  659783 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:49:30.924290  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:49:30.952573  659783 cri.go:89] found id: ""
	I1122 00:49:30.952596  659783 logs.go:282] 0 containers: []
	W1122 00:49:30.952605  659783 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:49:30.952611  659783 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:49:30.952669  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:49:30.982018  659783 cri.go:89] found id: "492c7cccdf0364fe71128220d1afdd81ad6e587b857b7898958d72c0a7eb4953"
	I1122 00:49:30.982044  659783 cri.go:89] found id: ""
	I1122 00:49:30.982052  659783 logs.go:282] 1 containers: [492c7cccdf0364fe71128220d1afdd81ad6e587b857b7898958d72c0a7eb4953]
	I1122 00:49:30.982117  659783 ssh_runner.go:195] Run: which crictl
	I1122 00:49:30.986210  659783 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1122 00:49:30.986362  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:49:31.020631  659783 cri.go:89] found id: ""
	I1122 00:49:31.020715  659783 logs.go:282] 0 containers: []
	W1122 00:49:31.020741  659783 logs.go:284] No container was found matching "kindnet"
	I1122 00:49:31.020760  659783 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:49:31.020871  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:49:31.048252  659783 cri.go:89] found id: ""
	I1122 00:49:31.048278  659783 logs.go:282] 0 containers: []
	W1122 00:49:31.048287  659783 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:49:31.048297  659783 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:49:31.048341  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:49:31.117367  659783 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:49:31.117393  659783 logs.go:123] Gathering logs for kube-apiserver [513641c31371d8c05f0f09cedcd3f1444e01013c0f90e22c86ddeadf4d1a516c] ...
	I1122 00:49:31.117411  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 513641c31371d8c05f0f09cedcd3f1444e01013c0f90e22c86ddeadf4d1a516c"
	I1122 00:49:31.153446  659783 logs.go:123] Gathering logs for kube-scheduler [c546164b4e284af119113a13e0a67e01c0dd3fe9b208a4092c79664dd912c8e8] ...
	I1122 00:49:31.153483  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c546164b4e284af119113a13e0a67e01c0dd3fe9b208a4092c79664dd912c8e8"
	I1122 00:49:31.242082  659783 logs.go:123] Gathering logs for kube-controller-manager [492c7cccdf0364fe71128220d1afdd81ad6e587b857b7898958d72c0a7eb4953] ...
	I1122 00:49:31.242159  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 492c7cccdf0364fe71128220d1afdd81ad6e587b857b7898958d72c0a7eb4953"
	I1122 00:49:29.222497  675703 node_ready.go:49] node "pause-028559" is "Ready"
	I1122 00:49:29.222523  675703 node_ready.go:38] duration metric: took 6.507599327s for node "pause-028559" to be "Ready" ...
	I1122 00:49:29.222537  675703 api_server.go:52] waiting for apiserver process to appear ...
	I1122 00:49:29.222596  675703 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:49:29.234532  675703 api_server.go:72] duration metric: took 6.697297012s to wait for apiserver process to appear ...
	I1122 00:49:29.234554  675703 api_server.go:88] waiting for apiserver healthz status ...
	I1122 00:49:29.234573  675703 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1122 00:49:29.281690  675703 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1122 00:49:29.281774  675703 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1122 00:49:29.735012  675703 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1122 00:49:29.743237  675703 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1122 00:49:29.743327  675703 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1122 00:49:30.234922  675703 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1122 00:49:30.243000  675703 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1122 00:49:30.244073  675703 api_server.go:141] control plane version: v1.34.1
	I1122 00:49:30.244096  675703 api_server.go:131] duration metric: took 1.00953584s to wait for apiserver health ...
	I1122 00:49:30.244105  675703 system_pods.go:43] waiting for kube-system pods to appear ...
	I1122 00:49:30.248099  675703 system_pods.go:59] 7 kube-system pods found
	I1122 00:49:30.248135  675703 system_pods.go:61] "coredns-66bc5c9577-mf9wz" [c60bc6ef-6579-4cd2-821a-d54eed09dd2f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:49:30.248144  675703 system_pods.go:61] "etcd-pause-028559" [cde6ab83-039f-4d41-b9d1-9f014e5a0cc2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1122 00:49:30.248157  675703 system_pods.go:61] "kindnet-md6h6" [c69079f9-3127-43a1-99c6-9ec5a41b79cc] Running
	I1122 00:49:30.248163  675703 system_pods.go:61] "kube-apiserver-pause-028559" [e01b3f00-761a-4a4a-883d-0f10a9dcee53] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1122 00:49:30.248171  675703 system_pods.go:61] "kube-controller-manager-pause-028559" [facefbfd-f19e-48a4-9b4a-bc60f64a69bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1122 00:49:30.248175  675703 system_pods.go:61] "kube-proxy-qnj6x" [1e6d9e0d-242d-484d-be05-aaaf175e8c31] Running
	I1122 00:49:30.248202  675703 system_pods.go:61] "kube-scheduler-pause-028559" [f94e531d-a5fe-4de0-9d3d-4779af25bf97] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1122 00:49:30.248212  675703 system_pods.go:74] duration metric: took 4.101007ms to wait for pod list to return data ...
	I1122 00:49:30.248223  675703 default_sa.go:34] waiting for default service account to be created ...
	I1122 00:49:30.251213  675703 default_sa.go:45] found service account: "default"
	I1122 00:49:30.251240  675703 default_sa.go:55] duration metric: took 3.00851ms for default service account to be created ...
	I1122 00:49:30.251249  675703 system_pods.go:116] waiting for k8s-apps to be running ...
	I1122 00:49:30.254542  675703 system_pods.go:86] 7 kube-system pods found
	I1122 00:49:30.254625  675703 system_pods.go:89] "coredns-66bc5c9577-mf9wz" [c60bc6ef-6579-4cd2-821a-d54eed09dd2f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:49:30.254648  675703 system_pods.go:89] "etcd-pause-028559" [cde6ab83-039f-4d41-b9d1-9f014e5a0cc2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1122 00:49:30.254667  675703 system_pods.go:89] "kindnet-md6h6" [c69079f9-3127-43a1-99c6-9ec5a41b79cc] Running
	I1122 00:49:30.254704  675703 system_pods.go:89] "kube-apiserver-pause-028559" [e01b3f00-761a-4a4a-883d-0f10a9dcee53] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1122 00:49:30.254729  675703 system_pods.go:89] "kube-controller-manager-pause-028559" [facefbfd-f19e-48a4-9b4a-bc60f64a69bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1122 00:49:30.254748  675703 system_pods.go:89] "kube-proxy-qnj6x" [1e6d9e0d-242d-484d-be05-aaaf175e8c31] Running
	I1122 00:49:30.254787  675703 system_pods.go:89] "kube-scheduler-pause-028559" [f94e531d-a5fe-4de0-9d3d-4779af25bf97] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1122 00:49:30.254814  675703 system_pods.go:126] duration metric: took 3.558107ms to wait for k8s-apps to be running ...
	I1122 00:49:30.254837  675703 system_svc.go:44] waiting for kubelet service to be running ....
	I1122 00:49:30.254921  675703 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:49:30.273565  675703 system_svc.go:56] duration metric: took 18.720025ms WaitForService to wait for kubelet
	I1122 00:49:30.273645  675703 kubeadm.go:587] duration metric: took 7.736413892s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:49:30.273677  675703 node_conditions.go:102] verifying NodePressure condition ...
	I1122 00:49:30.281343  675703 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1122 00:49:30.281423  675703 node_conditions.go:123] node cpu capacity is 2
	I1122 00:49:30.281450  675703 node_conditions.go:105] duration metric: took 7.753356ms to run NodePressure ...
	I1122 00:49:30.281478  675703 start.go:242] waiting for startup goroutines ...
	I1122 00:49:30.281518  675703 start.go:247] waiting for cluster config update ...
	I1122 00:49:30.281540  675703 start.go:256] writing updated cluster config ...
	I1122 00:49:30.281929  675703 ssh_runner.go:195] Run: rm -f paused
	I1122 00:49:30.290498  675703 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:49:30.291251  675703 kapi.go:59] client config for pause-028559: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21934-513600/.minikube/profiles/pause-028559/client.crt", KeyFile:"/home/jenkins/minikube-integration/21934-513600/.minikube/profiles/pause-028559/client.key", CAFile:"/home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb2df0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1122 00:49:30.349437  675703 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-mf9wz" in "kube-system" namespace to be "Ready" or be gone ...
	W1122 00:49:32.356308  675703 pod_ready.go:104] pod "coredns-66bc5c9577-mf9wz" is not "Ready", error: <nil>
	I1122 00:49:31.287880  659783 logs.go:123] Gathering logs for CRI-O ...
	I1122 00:49:31.287910  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1122 00:49:31.361711  659783 logs.go:123] Gathering logs for container status ...
	I1122 00:49:31.361784  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:49:31.395895  659783 logs.go:123] Gathering logs for kubelet ...
	I1122 00:49:31.395974  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:49:31.519014  659783 logs.go:123] Gathering logs for dmesg ...
	I1122 00:49:31.519054  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:49:34.040116  659783 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:49:34.040588  659783 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1122 00:49:34.040657  659783 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:49:34.040732  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:49:34.068378  659783 cri.go:89] found id: "513641c31371d8c05f0f09cedcd3f1444e01013c0f90e22c86ddeadf4d1a516c"
	I1122 00:49:34.068400  659783 cri.go:89] found id: ""
	I1122 00:49:34.068408  659783 logs.go:282] 1 containers: [513641c31371d8c05f0f09cedcd3f1444e01013c0f90e22c86ddeadf4d1a516c]
	I1122 00:49:34.068465  659783 ssh_runner.go:195] Run: which crictl
	I1122 00:49:34.072298  659783 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1122 00:49:34.072373  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:49:34.098629  659783 cri.go:89] found id: ""
	I1122 00:49:34.098653  659783 logs.go:282] 0 containers: []
	W1122 00:49:34.098663  659783 logs.go:284] No container was found matching "etcd"
	I1122 00:49:34.098669  659783 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1122 00:49:34.098726  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:49:34.124615  659783 cri.go:89] found id: ""
	I1122 00:49:34.124639  659783 logs.go:282] 0 containers: []
	W1122 00:49:34.124648  659783 logs.go:284] No container was found matching "coredns"
	I1122 00:49:34.124654  659783 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:49:34.124716  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:49:34.151983  659783 cri.go:89] found id: "c546164b4e284af119113a13e0a67e01c0dd3fe9b208a4092c79664dd912c8e8"
	I1122 00:49:34.152005  659783 cri.go:89] found id: ""
	I1122 00:49:34.152013  659783 logs.go:282] 1 containers: [c546164b4e284af119113a13e0a67e01c0dd3fe9b208a4092c79664dd912c8e8]
	I1122 00:49:34.152067  659783 ssh_runner.go:195] Run: which crictl
	I1122 00:49:34.155804  659783 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:49:34.155880  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:49:34.182319  659783 cri.go:89] found id: ""
	I1122 00:49:34.182344  659783 logs.go:282] 0 containers: []
	W1122 00:49:34.182353  659783 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:49:34.182360  659783 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:49:34.182438  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:49:34.210190  659783 cri.go:89] found id: "492c7cccdf0364fe71128220d1afdd81ad6e587b857b7898958d72c0a7eb4953"
	I1122 00:49:34.210211  659783 cri.go:89] found id: ""
	I1122 00:49:34.210219  659783 logs.go:282] 1 containers: [492c7cccdf0364fe71128220d1afdd81ad6e587b857b7898958d72c0a7eb4953]
	I1122 00:49:34.210296  659783 ssh_runner.go:195] Run: which crictl
	I1122 00:49:34.214023  659783 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1122 00:49:34.214112  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:49:34.241405  659783 cri.go:89] found id: ""
	I1122 00:49:34.241429  659783 logs.go:282] 0 containers: []
	W1122 00:49:34.241437  659783 logs.go:284] No container was found matching "kindnet"
	I1122 00:49:34.241443  659783 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:49:34.241553  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:49:34.267639  659783 cri.go:89] found id: ""
	I1122 00:49:34.267667  659783 logs.go:282] 0 containers: []
	W1122 00:49:34.267676  659783 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:49:34.267686  659783 logs.go:123] Gathering logs for kube-apiserver [513641c31371d8c05f0f09cedcd3f1444e01013c0f90e22c86ddeadf4d1a516c] ...
	I1122 00:49:34.267728  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 513641c31371d8c05f0f09cedcd3f1444e01013c0f90e22c86ddeadf4d1a516c"
	I1122 00:49:34.302018  659783 logs.go:123] Gathering logs for kube-scheduler [c546164b4e284af119113a13e0a67e01c0dd3fe9b208a4092c79664dd912c8e8] ...
	I1122 00:49:34.302050  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c546164b4e284af119113a13e0a67e01c0dd3fe9b208a4092c79664dd912c8e8"
	I1122 00:49:34.398301  659783 logs.go:123] Gathering logs for kube-controller-manager [492c7cccdf0364fe71128220d1afdd81ad6e587b857b7898958d72c0a7eb4953] ...
	I1122 00:49:34.398338  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 492c7cccdf0364fe71128220d1afdd81ad6e587b857b7898958d72c0a7eb4953"
	I1122 00:49:34.430046  659783 logs.go:123] Gathering logs for CRI-O ...
	I1122 00:49:34.430079  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1122 00:49:34.493544  659783 logs.go:123] Gathering logs for container status ...
	I1122 00:49:34.493578  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:49:34.525986  659783 logs.go:123] Gathering logs for kubelet ...
	I1122 00:49:34.526015  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:49:34.641721  659783 logs.go:123] Gathering logs for dmesg ...
	I1122 00:49:34.641756  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:49:34.659594  659783 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:49:34.659624  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:49:34.728222  659783 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1122 00:49:34.358009  675703 pod_ready.go:104] pod "coredns-66bc5c9577-mf9wz" is not "Ready", error: <nil>
	I1122 00:49:35.364144  675703 pod_ready.go:94] pod "coredns-66bc5c9577-mf9wz" is "Ready"
	I1122 00:49:35.364170  675703 pod_ready.go:86] duration metric: took 5.014700497s for pod "coredns-66bc5c9577-mf9wz" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:49:35.371385  675703 pod_ready.go:83] waiting for pod "etcd-pause-028559" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:49:35.384840  675703 pod_ready.go:94] pod "etcd-pause-028559" is "Ready"
	I1122 00:49:35.384918  675703 pod_ready.go:86] duration metric: took 13.494906ms for pod "etcd-pause-028559" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:49:35.397388  675703 pod_ready.go:83] waiting for pod "kube-apiserver-pause-028559" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:49:35.407774  675703 pod_ready.go:94] pod "kube-apiserver-pause-028559" is "Ready"
	I1122 00:49:35.407853  675703 pod_ready.go:86] duration metric: took 10.439628ms for pod "kube-apiserver-pause-028559" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:49:35.411381  675703 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-028559" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:49:35.552765  675703 pod_ready.go:94] pod "kube-controller-manager-pause-028559" is "Ready"
	I1122 00:49:35.552790  675703 pod_ready.go:86] duration metric: took 141.288685ms for pod "kube-controller-manager-pause-028559" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:49:35.752574  675703 pod_ready.go:83] waiting for pod "kube-proxy-qnj6x" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:49:36.153230  675703 pod_ready.go:94] pod "kube-proxy-qnj6x" is "Ready"
	I1122 00:49:36.153256  675703 pod_ready.go:86] duration metric: took 400.655531ms for pod "kube-proxy-qnj6x" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:49:36.353229  675703 pod_ready.go:83] waiting for pod "kube-scheduler-pause-028559" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:49:38.358301  675703 pod_ready.go:94] pod "kube-scheduler-pause-028559" is "Ready"
	I1122 00:49:38.358331  675703 pod_ready.go:86] duration metric: took 2.005075887s for pod "kube-scheduler-pause-028559" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:49:38.358344  675703 pod_ready.go:40] duration metric: took 8.067764447s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:49:38.417058  675703 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1122 00:49:38.420098  675703 out.go:179] * Done! kubectl is now configured to use "pause-028559" cluster and "default" namespace by default
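Editor's note: the pod_ready.go lines above wait up to 4m0s for each labelled kube-system pod to report Ready or to disappear. A rough client-go sketch of that polling loop, assuming a kubeconfig path rather than the in-test client config (an approximation of pod_ready.go, not its actual implementation):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodReadyOrGone polls until the pod reports the Ready condition,
// no longer exists, or the timeout expires.
func waitForPodReadyOrGone(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true, func(ctx context.Context) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // "or be gone"
		}
		if err != nil {
			return false, nil // transient API error: keep polling
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil
	})
}

func main() {
	// Placeholder kubeconfig; the test builds its client from the profile instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForPodReadyOrGone(context.Background(), cs, "kube-system", "coredns-66bc5c9577-mf9wz", 4*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready (or gone)")
}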
	I1122 00:49:37.228499  659783 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:49:37.228919  659783 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1122 00:49:37.228992  659783 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:49:37.229062  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:49:37.254860  659783 cri.go:89] found id: "513641c31371d8c05f0f09cedcd3f1444e01013c0f90e22c86ddeadf4d1a516c"
	I1122 00:49:37.254927  659783 cri.go:89] found id: ""
	I1122 00:49:37.254949  659783 logs.go:282] 1 containers: [513641c31371d8c05f0f09cedcd3f1444e01013c0f90e22c86ddeadf4d1a516c]
	I1122 00:49:37.255027  659783 ssh_runner.go:195] Run: which crictl
	I1122 00:49:37.258686  659783 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1122 00:49:37.258780  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:49:37.287483  659783 cri.go:89] found id: ""
	I1122 00:49:37.287505  659783 logs.go:282] 0 containers: []
	W1122 00:49:37.287515  659783 logs.go:284] No container was found matching "etcd"
	I1122 00:49:37.287521  659783 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1122 00:49:37.287577  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:49:37.319527  659783 cri.go:89] found id: ""
	I1122 00:49:37.319552  659783 logs.go:282] 0 containers: []
	W1122 00:49:37.319561  659783 logs.go:284] No container was found matching "coredns"
	I1122 00:49:37.319568  659783 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:49:37.319631  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:49:37.346037  659783 cri.go:89] found id: "c546164b4e284af119113a13e0a67e01c0dd3fe9b208a4092c79664dd912c8e8"
	I1122 00:49:37.346058  659783 cri.go:89] found id: ""
	I1122 00:49:37.346066  659783 logs.go:282] 1 containers: [c546164b4e284af119113a13e0a67e01c0dd3fe9b208a4092c79664dd912c8e8]
	I1122 00:49:37.346123  659783 ssh_runner.go:195] Run: which crictl
	I1122 00:49:37.349584  659783 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:49:37.349651  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:49:37.376164  659783 cri.go:89] found id: ""
	I1122 00:49:37.376187  659783 logs.go:282] 0 containers: []
	W1122 00:49:37.376196  659783 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:49:37.376202  659783 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:49:37.376265  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:49:37.403661  659783 cri.go:89] found id: "492c7cccdf0364fe71128220d1afdd81ad6e587b857b7898958d72c0a7eb4953"
	I1122 00:49:37.403734  659783 cri.go:89] found id: ""
	I1122 00:49:37.403752  659783 logs.go:282] 1 containers: [492c7cccdf0364fe71128220d1afdd81ad6e587b857b7898958d72c0a7eb4953]
	I1122 00:49:37.403817  659783 ssh_runner.go:195] Run: which crictl
	I1122 00:49:37.407561  659783 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1122 00:49:37.407631  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:49:37.433527  659783 cri.go:89] found id: ""
	I1122 00:49:37.433547  659783 logs.go:282] 0 containers: []
	W1122 00:49:37.433556  659783 logs.go:284] No container was found matching "kindnet"
	I1122 00:49:37.433562  659783 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:49:37.433622  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:49:37.466464  659783 cri.go:89] found id: ""
	I1122 00:49:37.466489  659783 logs.go:282] 0 containers: []
	W1122 00:49:37.466498  659783 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:49:37.466508  659783 logs.go:123] Gathering logs for dmesg ...
	I1122 00:49:37.466520  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:49:37.484698  659783 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:49:37.484728  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:49:37.558527  659783 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1122 00:49:37.558549  659783 logs.go:123] Gathering logs for kube-apiserver [513641c31371d8c05f0f09cedcd3f1444e01013c0f90e22c86ddeadf4d1a516c] ...
	I1122 00:49:37.558561  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 513641c31371d8c05f0f09cedcd3f1444e01013c0f90e22c86ddeadf4d1a516c"
	I1122 00:49:37.592244  659783 logs.go:123] Gathering logs for kube-scheduler [c546164b4e284af119113a13e0a67e01c0dd3fe9b208a4092c79664dd912c8e8] ...
	I1122 00:49:37.592278  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c546164b4e284af119113a13e0a67e01c0dd3fe9b208a4092c79664dd912c8e8"
	I1122 00:49:37.654272  659783 logs.go:123] Gathering logs for kube-controller-manager [492c7cccdf0364fe71128220d1afdd81ad6e587b857b7898958d72c0a7eb4953] ...
	I1122 00:49:37.654308  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 492c7cccdf0364fe71128220d1afdd81ad6e587b857b7898958d72c0a7eb4953"
	I1122 00:49:37.683329  659783 logs.go:123] Gathering logs for CRI-O ...
	I1122 00:49:37.683354  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1122 00:49:37.742893  659783 logs.go:123] Gathering logs for container status ...
	I1122 00:49:37.742939  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:49:37.774790  659783 logs.go:123] Gathering logs for kubelet ...
	I1122 00:49:37.774817  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:49:40.392116  659783 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:49:40.392564  659783 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1122 00:49:40.392613  659783 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1122 00:49:40.392673  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1122 00:49:40.421353  659783 cri.go:89] found id: "513641c31371d8c05f0f09cedcd3f1444e01013c0f90e22c86ddeadf4d1a516c"
	I1122 00:49:40.421373  659783 cri.go:89] found id: ""
	I1122 00:49:40.421381  659783 logs.go:282] 1 containers: [513641c31371d8c05f0f09cedcd3f1444e01013c0f90e22c86ddeadf4d1a516c]
	I1122 00:49:40.421438  659783 ssh_runner.go:195] Run: which crictl
	I1122 00:49:40.425192  659783 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1122 00:49:40.425271  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1122 00:49:40.450286  659783 cri.go:89] found id: ""
	I1122 00:49:40.450314  659783 logs.go:282] 0 containers: []
	W1122 00:49:40.450323  659783 logs.go:284] No container was found matching "etcd"
	I1122 00:49:40.450330  659783 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1122 00:49:40.450386  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1122 00:49:40.476864  659783 cri.go:89] found id: ""
	I1122 00:49:40.476889  659783 logs.go:282] 0 containers: []
	W1122 00:49:40.476898  659783 logs.go:284] No container was found matching "coredns"
	I1122 00:49:40.476904  659783 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1122 00:49:40.476962  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1122 00:49:40.505369  659783 cri.go:89] found id: "c546164b4e284af119113a13e0a67e01c0dd3fe9b208a4092c79664dd912c8e8"
	I1122 00:49:40.505397  659783 cri.go:89] found id: ""
	I1122 00:49:40.505405  659783 logs.go:282] 1 containers: [c546164b4e284af119113a13e0a67e01c0dd3fe9b208a4092c79664dd912c8e8]
	I1122 00:49:40.505462  659783 ssh_runner.go:195] Run: which crictl
	I1122 00:49:40.509111  659783 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1122 00:49:40.509187  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1122 00:49:40.536796  659783 cri.go:89] found id: ""
	I1122 00:49:40.536820  659783 logs.go:282] 0 containers: []
	W1122 00:49:40.536829  659783 logs.go:284] No container was found matching "kube-proxy"
	I1122 00:49:40.536835  659783 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1122 00:49:40.536892  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1122 00:49:40.563132  659783 cri.go:89] found id: "492c7cccdf0364fe71128220d1afdd81ad6e587b857b7898958d72c0a7eb4953"
	I1122 00:49:40.563153  659783 cri.go:89] found id: ""
	I1122 00:49:40.563161  659783 logs.go:282] 1 containers: [492c7cccdf0364fe71128220d1afdd81ad6e587b857b7898958d72c0a7eb4953]
	I1122 00:49:40.563228  659783 ssh_runner.go:195] Run: which crictl
	I1122 00:49:40.567173  659783 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1122 00:49:40.567237  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1122 00:49:40.595171  659783 cri.go:89] found id: ""
	I1122 00:49:40.595195  659783 logs.go:282] 0 containers: []
	W1122 00:49:40.595205  659783 logs.go:284] No container was found matching "kindnet"
	I1122 00:49:40.595212  659783 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1122 00:49:40.595268  659783 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1122 00:49:40.621301  659783 cri.go:89] found id: ""
	I1122 00:49:40.621326  659783 logs.go:282] 0 containers: []
	W1122 00:49:40.621335  659783 logs.go:284] No container was found matching "storage-provisioner"
	I1122 00:49:40.621344  659783 logs.go:123] Gathering logs for kube-apiserver [513641c31371d8c05f0f09cedcd3f1444e01013c0f90e22c86ddeadf4d1a516c] ...
	I1122 00:49:40.621357  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 513641c31371d8c05f0f09cedcd3f1444e01013c0f90e22c86ddeadf4d1a516c"
	I1122 00:49:40.660396  659783 logs.go:123] Gathering logs for kube-scheduler [c546164b4e284af119113a13e0a67e01c0dd3fe9b208a4092c79664dd912c8e8] ...
	I1122 00:49:40.660430  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c546164b4e284af119113a13e0a67e01c0dd3fe9b208a4092c79664dd912c8e8"
	I1122 00:49:40.723262  659783 logs.go:123] Gathering logs for kube-controller-manager [492c7cccdf0364fe71128220d1afdd81ad6e587b857b7898958d72c0a7eb4953] ...
	I1122 00:49:40.723325  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 492c7cccdf0364fe71128220d1afdd81ad6e587b857b7898958d72c0a7eb4953"
	I1122 00:49:40.753222  659783 logs.go:123] Gathering logs for CRI-O ...
	I1122 00:49:40.753301  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1122 00:49:40.837307  659783 logs.go:123] Gathering logs for container status ...
	I1122 00:49:40.837395  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1122 00:49:40.890696  659783 logs.go:123] Gathering logs for kubelet ...
	I1122 00:49:40.890769  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1122 00:49:41.028245  659783 logs.go:123] Gathering logs for dmesg ...
	I1122 00:49:41.028300  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1122 00:49:41.048201  659783 logs.go:123] Gathering logs for describe nodes ...
	I1122 00:49:41.048436  659783 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1122 00:49:41.156552  659783 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
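Editor's note: each retry cycle in the 659783 stream above begins with the same probe, an HTTPS GET against the apiserver's /healthz endpoint, which keeps failing with "connection refused" while the apiserver is down. A minimal sketch of that probe; certificate verification is skipped here for brevity, whereas the real check uses the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption: skip verification instead of loading the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.76.2:8443/healthz")
	if err != nil {
		// The "connection refused" branch logged repeatedly above.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
}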
	
	
	==> CRI-O <==
	Nov 22 00:49:22 pause-028559 crio[2044]: time="2025-11-22T00:49:22.913778011Z" level=info msg="Started container" PID=2338 containerID=d3753a55d1b0852aeac6d506d250fc46da9733dcd90885d3802044a0a80ad951 description=kube-system/kube-scheduler-pause-028559/kube-scheduler id=05227cb2-968f-4bc1-97bb-6a520506fee5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=447f373a9450540c35ccff421b669437b04b54109fdd4952b307a3a2dbd1aa13
	Nov 22 00:49:22 pause-028559 crio[2044]: time="2025-11-22T00:49:22.921084356Z" level=info msg="Created container 87e6aa383dd95744b3c571bbd518cffcd7a4eb8cd0d3a6e584ed17b3615976fd: kube-system/kindnet-md6h6/kindnet-cni" id=12247211-8cd6-4ae1-a318-a7b24b4d4501 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:49:22 pause-028559 crio[2044]: time="2025-11-22T00:49:22.921856165Z" level=info msg="Starting container: 87e6aa383dd95744b3c571bbd518cffcd7a4eb8cd0d3a6e584ed17b3615976fd" id=6bff490c-8540-4c74-931d-5c68197f6a12 name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:49:22 pause-028559 crio[2044]: time="2025-11-22T00:49:22.926423966Z" level=info msg="Started container" PID=2347 containerID=87e6aa383dd95744b3c571bbd518cffcd7a4eb8cd0d3a6e584ed17b3615976fd description=kube-system/kindnet-md6h6/kindnet-cni id=6bff490c-8540-4c74-931d-5c68197f6a12 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d59ab14bb3ccc2a17b058cab963d51d99b6250fe01089cb9ec2107b7add11a95
	Nov 22 00:49:22 pause-028559 crio[2044]: time="2025-11-22T00:49:22.96926119Z" level=info msg="Created container 56e196e0cbb051ab34ffee4c1a27c68525705ef12cf36abbf63ad2924e3b38d1: kube-system/coredns-66bc5c9577-mf9wz/coredns" id=1f1ed80a-1f96-436f-aa86-1417ed37b58d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:49:22 pause-028559 crio[2044]: time="2025-11-22T00:49:22.97361851Z" level=info msg="Starting container: 56e196e0cbb051ab34ffee4c1a27c68525705ef12cf36abbf63ad2924e3b38d1" id=0b9576d6-7191-4f6b-91b2-b4fa3eba215f name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:49:22 pause-028559 crio[2044]: time="2025-11-22T00:49:22.976075365Z" level=info msg="Started container" PID=2376 containerID=56e196e0cbb051ab34ffee4c1a27c68525705ef12cf36abbf63ad2924e3b38d1 description=kube-system/coredns-66bc5c9577-mf9wz/coredns id=0b9576d6-7191-4f6b-91b2-b4fa3eba215f name=/runtime.v1.RuntimeService/StartContainer sandboxID=d7e1d7e0efbc2a7ec9c7c67cdaa6fb979dec664c9f36c6a48c08c993cd41dee8
	Nov 22 00:49:22 pause-028559 crio[2044]: time="2025-11-22T00:49:22.97730609Z" level=info msg="Created container 04ac46e69fa75a9a16c68b21e8dbf076926ba49248fd3f5118e8445fb78a9d5a: kube-system/kube-proxy-qnj6x/kube-proxy" id=2d83e4a2-0ebf-4888-a9de-8321dcaffab0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:49:22 pause-028559 crio[2044]: time="2025-11-22T00:49:22.978094744Z" level=info msg="Starting container: 04ac46e69fa75a9a16c68b21e8dbf076926ba49248fd3f5118e8445fb78a9d5a" id=36170a58-33a9-4414-9b81-bddab701a2ba name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:49:22 pause-028559 crio[2044]: time="2025-11-22T00:49:22.986553622Z" level=info msg="Started container" PID=2351 containerID=04ac46e69fa75a9a16c68b21e8dbf076926ba49248fd3f5118e8445fb78a9d5a description=kube-system/kube-proxy-qnj6x/kube-proxy id=36170a58-33a9-4414-9b81-bddab701a2ba name=/runtime.v1.RuntimeService/StartContainer sandboxID=c2dbf2401845cb444159fa771efde2c2d158ea0001fde11ce0f78c0d60a59f06
	Nov 22 00:49:33 pause-028559 crio[2044]: time="2025-11-22T00:49:33.282215374Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:49:33 pause-028559 crio[2044]: time="2025-11-22T00:49:33.2861763Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:49:33 pause-028559 crio[2044]: time="2025-11-22T00:49:33.286352894Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 22 00:49:33 pause-028559 crio[2044]: time="2025-11-22T00:49:33.286386985Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:49:33 pause-028559 crio[2044]: time="2025-11-22T00:49:33.289526388Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:49:33 pause-028559 crio[2044]: time="2025-11-22T00:49:33.28957309Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 22 00:49:33 pause-028559 crio[2044]: time="2025-11-22T00:49:33.289592708Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:49:33 pause-028559 crio[2044]: time="2025-11-22T00:49:33.293053128Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:49:33 pause-028559 crio[2044]: time="2025-11-22T00:49:33.293086768Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 22 00:49:33 pause-028559 crio[2044]: time="2025-11-22T00:49:33.293108822Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:49:33 pause-028559 crio[2044]: time="2025-11-22T00:49:33.296181077Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:49:33 pause-028559 crio[2044]: time="2025-11-22T00:49:33.296212977Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 22 00:49:33 pause-028559 crio[2044]: time="2025-11-22T00:49:33.296235377Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:49:33 pause-028559 crio[2044]: time="2025-11-22T00:49:33.299265195Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:49:33 pause-028559 crio[2044]: time="2025-11-22T00:49:33.299301199Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	56e196e0cbb05       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   21 seconds ago      Running             coredns                   1                   d7e1d7e0efbc2       coredns-66bc5c9577-mf9wz               kube-system
	87e6aa383dd95       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   21 seconds ago      Running             kindnet-cni               1                   d59ab14bb3ccc       kindnet-md6h6                          kube-system
	04ac46e69fa75       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   21 seconds ago      Running             kube-proxy                1                   c2dbf2401845c       kube-proxy-qnj6x                       kube-system
	096e422d76e9c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   21 seconds ago      Running             kube-controller-manager   1                   f15ca62deff89       kube-controller-manager-pause-028559   kube-system
	d3753a55d1b08       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   21 seconds ago      Running             kube-scheduler            1                   447f373a94505       kube-scheduler-pause-028559            kube-system
	369bd2d9f6691       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   21 seconds ago      Running             kube-apiserver            1                   4bf93441f7197       kube-apiserver-pause-028559            kube-system
	c697a18245b56       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   21 seconds ago      Running             etcd                      1                   c5f392634b9bc       etcd-pause-028559                      kube-system
	e8a18ac29ae5a       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   34 seconds ago      Exited              coredns                   0                   d7e1d7e0efbc2       coredns-66bc5c9577-mf9wz               kube-system
	7cf54964fa620       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   45 seconds ago      Exited              kindnet-cni               0                   d59ab14bb3ccc       kindnet-md6h6                          kube-system
	8a47d7c0ee952       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   45 seconds ago      Exited              kube-proxy                0                   c2dbf2401845c       kube-proxy-qnj6x                       kube-system
	36494fae0c15c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   59 seconds ago      Exited              kube-controller-manager   0                   f15ca62deff89       kube-controller-manager-pause-028559   kube-system
	b36b71426eedd       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   59 seconds ago      Exited              kube-scheduler            0                   447f373a94505       kube-scheduler-pause-028559            kube-system
	fe27dafacf48d       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   59 seconds ago      Exited              kube-apiserver            0                   4bf93441f7197       kube-apiserver-pause-028559            kube-system
	c1bb2c7a299bb       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   59 seconds ago      Exited              etcd                      0                   c5f392634b9bc       etcd-pause-028559                      kube-system
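Editor's note: the container status table above is the node-side output of "sudo crictl ps -a", with a "docker ps -a" fallback, as in the command logged earlier. A local approximation via os/exec; a sketch that assumes crictl is installed and sudo needs no password:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the logged fallback: try crictl first, then docker.
	out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
	if err != nil {
		out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	}
	if err != nil {
		fmt.Println("neither crictl nor docker worked:", err)
		return
	}
	fmt.Print(string(out))
}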
	
	
	==> coredns [56e196e0cbb051ab34ffee4c1a27c68525705ef12cf36abbf63ad2924e3b38d1] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52746 - 36541 "HINFO IN 1552381233468049872.8928525143797496713. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023222908s
	
	
	==> coredns [e8a18ac29ae5ac05f6b5b4c70bcf6b1fc73a710e59136307cfa68c8bcd36557d] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42276 - 1746 "HINFO IN 4082627028878797549.2099873197882724034. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014723523s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-028559
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-028559
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=pause-028559
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T00_48_53_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:48:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-028559
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:49:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:49:23 +0000   Sat, 22 Nov 2025 00:48:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:49:23 +0000   Sat, 22 Nov 2025 00:48:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:49:23 +0000   Sat, 22 Nov 2025 00:48:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 22 Nov 2025 00:49:23 +0000   Sat, 22 Nov 2025 00:49:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-028559
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c92eea4d3c03c0156e01fcf8691e3907
	  System UUID:                c6993104-bd8d-4c82-9995-66f6f1c875cf
	  Boot ID:                    72ac7385-472f-47d1-a23e-bd80468d6e09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-mf9wz                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     47s
	  kube-system                 etcd-pause-028559                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         54s
	  kube-system                 kindnet-md6h6                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      47s
	  kube-system                 kube-apiserver-pause-028559             250m (12%)    0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 kube-controller-manager-pause-028559    200m (10%)    0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 kube-proxy-qnj6x                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 kube-scheduler-pause-028559             100m (5%)     0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 45s                kube-proxy       
	  Normal   Starting                 15s                kube-proxy       
	  Normal   NodeHasSufficientPID     60s (x8 over 60s)  kubelet          Node pause-028559 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 60s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  60s (x8 over 60s)  kubelet          Node pause-028559 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s (x8 over 60s)  kubelet          Node pause-028559 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 60s                kubelet          Starting kubelet.
	  Normal   Starting                 52s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 52s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  52s                kubelet          Node pause-028559 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    52s                kubelet          Node pause-028559 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     52s                kubelet          Node pause-028559 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           48s                node-controller  Node pause-028559 event: Registered Node pause-028559 in Controller
	  Normal   NodeReady                35s                kubelet          Node pause-028559 status is now: NodeReady
	  Normal   RegisteredNode           12s                node-controller  Node pause-028559 event: Registered Node pause-028559 in Controller
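Editor's note: the Allocated resources block above is simply the column sums of the per-pod requests listed in Non-terminated Pods: 100m + 100m + 100m + 250m + 200m + 0m + 100m = 850m of the node's 2000m allocatable CPU (about 42%), and 70Mi + 100Mi + 50Mi = 220Mi of memory requests. The 100m CPU limit and 50Mi of the 220Mi memory limit come from kindnet-md6h6, with coredns-66bc5c9577-mf9wz contributing the remaining 170Mi memory limit.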
	
	
	==> dmesg <==
	[  +3.904643] overlayfs: idmapped layers are currently not supported
	[Nov22 00:15] overlayfs: idmapped layers are currently not supported
	[Nov22 00:23] overlayfs: idmapped layers are currently not supported
	[  +4.038304] overlayfs: idmapped layers are currently not supported
	[Nov22 00:24] overlayfs: idmapped layers are currently not supported
	[Nov22 00:25] overlayfs: idmapped layers are currently not supported
	[Nov22 00:26] overlayfs: idmapped layers are currently not supported
	[Nov22 00:31] overlayfs: idmapped layers are currently not supported
	[ +30.712010] overlayfs: idmapped layers are currently not supported
	[Nov22 00:32] overlayfs: idmapped layers are currently not supported
	[Nov22 00:33] overlayfs: idmapped layers are currently not supported
	[Nov22 00:35] overlayfs: idmapped layers are currently not supported
	[Nov22 00:36] overlayfs: idmapped layers are currently not supported
	[ +18.168104] overlayfs: idmapped layers are currently not supported
	[Nov22 00:37] overlayfs: idmapped layers are currently not supported
	[ +56.322609] overlayfs: idmapped layers are currently not supported
	[Nov22 00:38] overlayfs: idmapped layers are currently not supported
	[Nov22 00:39] overlayfs: idmapped layers are currently not supported
	[ +23.174928] overlayfs: idmapped layers are currently not supported
	[Nov22 00:41] overlayfs: idmapped layers are currently not supported
	[Nov22 00:42] overlayfs: idmapped layers are currently not supported
	[Nov22 00:44] overlayfs: idmapped layers are currently not supported
	[Nov22 00:45] overlayfs: idmapped layers are currently not supported
	[Nov22 00:46] overlayfs: idmapped layers are currently not supported
	[Nov22 00:48] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [c1bb2c7a299bbbaca1218d7345b106ea8559d1df49b48d1e8effc89a6e7a38b3] <==
	{"level":"warn","ts":"2025-11-22T00:48:48.674347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:48:48.710781Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:48:48.726288Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:48:48.830058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35132","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-22T00:48:58.102773Z","caller":"traceutil/trace.go:172","msg":"trace[427647510] transaction","detail":"{read_only:false; response_revision:373; number_of_response:1; }","duration":"101.393068ms","start":"2025-11-22T00:48:58.001356Z","end":"2025-11-22T00:48:58.102749Z","steps":["trace[427647510] 'compare'  (duration: 26.57408ms)","trace[427647510] 'store kv pair into bolt db' {req_type:put; key:/registry/configmaps/kube-node-lease/kube-root-ca.crt; req_size:1740; } (duration: 43.925413ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-22T00:48:58.103949Z","caller":"traceutil/trace.go:172","msg":"trace[1198625101] transaction","detail":"{read_only:false; response_revision:374; number_of_response:1; }","duration":"101.295725ms","start":"2025-11-22T00:48:58.002631Z","end":"2025-11-22T00:48:58.103927Z","steps":["trace[1198625101] 'process raft request'  (duration: 71.347394ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-22T00:48:58.151315Z","caller":"traceutil/trace.go:172","msg":"trace[534132880] transaction","detail":"{read_only:false; response_revision:375; number_of_response:1; }","duration":"148.537473ms","start":"2025-11-22T00:48:58.002759Z","end":"2025-11-22T00:48:58.151296Z","steps":["trace[534132880] 'process raft request'  (duration: 129.11136ms)","trace[534132880] 'compare'  (duration: 16.445195ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-22T00:49:14.399681Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-22T00:49:14.399724Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-028559","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-11-22T00:49:14.399941Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-22T00:49:14.537856Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-22T00:49:14.539293Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-22T00:49:14.539350Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-22T00:49:14.539434Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-22T00:49:14.539458Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-22T00:49:14.539469Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-22T00:49:14.539486Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"warn","ts":"2025-11-22T00:49:14.539444Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-11-22T00:49:14.539530Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"error","ts":"2025-11-22T00:49:14.539538Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-22T00:49:14.539520Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-22T00:49:14.542789Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-11-22T00:49:14.542870Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-22T00:49:14.542908Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-22T00:49:14.542916Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-028559","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> etcd [c697a18245b5616f58771650b29470b900c4e63fb555bd2347a20b506820e266] <==
	{"level":"warn","ts":"2025-11-22T00:49:27.747549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:49:27.769697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:49:27.789443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:49:27.829439Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:49:27.848381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:49:27.886618Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:49:27.920916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:49:27.926458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:49:27.984157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:49:28.004268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:49:28.046793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:49:28.051129Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:49:28.094987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:49:28.099102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:49:28.119320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:49:28.146047Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:49:28.171455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:49:28.189189Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:49:28.206441Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:49:28.227836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:49:28.251214Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:49:28.280129Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:49:28.315650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:49:28.330323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:49:28.381196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32874","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 00:49:44 up  5:31,  0 user,  load average: 1.89, 2.45, 1.96
	Linux pause-028559 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7cf54964fa6209d21f425b45edbf33a457dcdc58ce72370d8035cad09e292b10] <==
	I1122 00:48:59.538061       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1122 00:48:59.610085       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1122 00:48:59.610292       1 main.go:148] setting mtu 1500 for CNI 
	I1122 00:48:59.610334       1 main.go:178] kindnetd IP family: "ipv4"
	I1122 00:48:59.610375       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-22T00:48:59Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1122 00:48:59.719884       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1122 00:48:59.809952       1 controller.go:381] "Waiting for informer caches to sync"
	I1122 00:48:59.809981       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1122 00:48:59.810112       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1122 00:49:00.125861       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1122 00:49:00.125976       1 metrics.go:72] Registering metrics
	I1122 00:49:00.126100       1 controller.go:711] "Syncing nftables rules"
	I1122 00:49:09.718622       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:49:09.718666       1 main.go:301] handling current node
	
	
	==> kindnet [87e6aa383dd95744b3c571bbd518cffcd7a4eb8cd0d3a6e584ed17b3615976fd] <==
	I1122 00:49:23.019359       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1122 00:49:23.019745       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1122 00:49:23.019916       1 main.go:148] setting mtu 1500 for CNI 
	I1122 00:49:23.019957       1 main.go:178] kindnetd IP family: "ipv4"
	I1122 00:49:23.019992       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-22T00:49:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1122 00:49:23.281604       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1122 00:49:23.281673       1 controller.go:381] "Waiting for informer caches to sync"
	I1122 00:49:23.281706       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1122 00:49:23.282580       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1122 00:49:29.385942       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1122 00:49:29.385980       1 metrics.go:72] Registering metrics
	I1122 00:49:29.386039       1 controller.go:711] "Syncing nftables rules"
	I1122 00:49:33.281875       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:49:33.281923       1 main.go:301] handling current node
	I1122 00:49:43.281892       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:49:43.281921       1 main.go:301] handling current node
	
	
	==> kube-apiserver [369bd2d9f6691fe8442c1241cc0e13dde6eb84069c52da0c86e0481560a45f58] <==
	I1122 00:49:29.227576       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1122 00:49:29.227589       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1122 00:49:29.228257       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1122 00:49:29.228304       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1122 00:49:29.245644       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1122 00:49:29.251739       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1122 00:49:29.251928       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1122 00:49:29.252156       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1122 00:49:29.252448       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1122 00:49:29.260293       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1122 00:49:29.295392       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1122 00:49:29.295523       1 policy_source.go:240] refreshing policies
	I1122 00:49:29.308471       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:49:29.309783       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1122 00:49:29.316136       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1122 00:49:29.333383       1 aggregator.go:171] initial CRD sync complete...
	I1122 00:49:29.333535       1 autoregister_controller.go:144] Starting autoregister controller
	I1122 00:49:29.335086       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1122 00:49:29.335153       1 cache.go:39] Caches are synced for autoregister controller
	I1122 00:49:29.944674       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1122 00:49:31.176374       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1122 00:49:32.571263       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1122 00:49:32.820014       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1122 00:49:32.869979       1 controller.go:667] quota admission added evaluator for: endpoints
	I1122 00:49:32.970319       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [fe27dafacf48d07af4ed5cb9690723267dc16f0e9bc5356896a5e1d595009ff0] <==
	W1122 00:49:14.414256       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1122 00:49:14.414561       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1122 00:49:14.414321       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1122 00:49:14.414352       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1122 00:49:14.414378       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1122 00:49:14.414781       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1122 00:49:14.414413       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1122 00:49:14.414870       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1122 00:49:14.414930       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1122 00:49:14.415000       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1122 00:49:14.415208       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1122 00:49:14.415319       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1122 00:49:14.415445       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1122 00:49:14.415309       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1122 00:49:14.415590       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1122 00:49:14.414526       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1122 00:49:14.414625       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1122 00:49:14.414652       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1122 00:49:14.414690       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1122 00:49:14.414724       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1122 00:49:14.414752       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1122 00:49:14.414843       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1122 00:49:14.415693       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1122 00:49:14.415771       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1122 00:49:14.416154       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [096e422d76e9c2d03ee46d5100a4c9d88d27872157f3e04d2ca3d33d12269f96] <==
	I1122 00:49:32.566720       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1122 00:49:32.569057       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1122 00:49:32.569194       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:49:32.573401       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1122 00:49:32.579859       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1122 00:49:32.582079       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1122 00:49:32.582177       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1122 00:49:32.582275       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-028559"
	I1122 00:49:32.582321       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1122 00:49:32.587432       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:49:32.612018       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1122 00:49:32.612109       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1122 00:49:32.612126       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1122 00:49:32.612167       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1122 00:49:32.612613       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1122 00:49:32.612638       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1122 00:49:32.612686       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1122 00:49:32.612714       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1122 00:49:32.618390       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1122 00:49:32.620746       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1122 00:49:32.621740       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:49:32.624085       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1122 00:49:32.631992       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1122 00:49:32.632091       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1122 00:49:32.632124       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	
	
	==> kube-controller-manager [36494fae0c15c7ac23088851e0409e2f96cb7f3066877902ebe7aedf80916b67] <==
	I1122 00:48:56.571666       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1122 00:48:56.572768       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1122 00:48:56.572937       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1122 00:48:56.577489       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1122 00:48:56.580594       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:48:56.583019       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1122 00:48:56.589997       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1122 00:48:56.597260       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:48:56.616142       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1122 00:48:56.617413       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1122 00:48:56.617422       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1122 00:48:56.617547       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1122 00:48:56.617600       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1122 00:48:56.617629       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1122 00:48:56.617658       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1122 00:48:56.617705       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1122 00:48:56.618543       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1122 00:48:56.620806       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1122 00:48:56.620936       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1122 00:48:56.620997       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1122 00:48:56.621181       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1122 00:48:56.625680       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:48:56.627207       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-028559" podCIDRs=["10.244.0.0/24"]
	I1122 00:48:56.630768       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1122 00:49:11.574061       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [04ac46e69fa75a9a16c68b21e8dbf076926ba49248fd3f5118e8445fb78a9d5a] <==
	I1122 00:49:23.044612       1 server_linux.go:53] "Using iptables proxy"
	I1122 00:49:24.965982       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1122 00:49:29.289556       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1122 00:49:29.305885       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1122 00:49:29.325934       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1122 00:49:29.443747       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1122 00:49:29.443811       1 server_linux.go:132] "Using iptables Proxier"
	I1122 00:49:29.460265       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1122 00:49:29.460655       1 server.go:527] "Version info" version="v1.34.1"
	I1122 00:49:29.460679       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:49:29.473967       1 config.go:106] "Starting endpoint slice config controller"
	I1122 00:49:29.474153       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1122 00:49:29.474491       1 config.go:200] "Starting service config controller"
	I1122 00:49:29.474546       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1122 00:49:29.474913       1 config.go:403] "Starting serviceCIDR config controller"
	I1122 00:49:29.474971       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1122 00:49:29.475467       1 config.go:309] "Starting node config controller"
	I1122 00:49:29.475527       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1122 00:49:29.475558       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1122 00:49:29.576274       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1122 00:49:29.576344       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1122 00:49:29.576586       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [8a47d7c0ee952b2d53a1e55f636df9b8ea9e35a1de95e8cd16ba1ee91d2429e5] <==
	I1122 00:48:58.792896       1 server_linux.go:53] "Using iptables proxy"
	I1122 00:48:58.883512       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1122 00:48:58.992801       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1122 00:48:58.992842       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1122 00:48:58.992906       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1122 00:48:59.012676       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1122 00:48:59.012735       1 server_linux.go:132] "Using iptables Proxier"
	I1122 00:48:59.018438       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1122 00:48:59.018776       1 server.go:527] "Version info" version="v1.34.1"
	I1122 00:48:59.018797       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:48:59.020072       1 config.go:200] "Starting service config controller"
	I1122 00:48:59.020095       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1122 00:48:59.020111       1 config.go:106] "Starting endpoint slice config controller"
	I1122 00:48:59.020115       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1122 00:48:59.020127       1 config.go:403] "Starting serviceCIDR config controller"
	I1122 00:48:59.020131       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1122 00:48:59.022920       1 config.go:309] "Starting node config controller"
	I1122 00:48:59.022941       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1122 00:48:59.022949       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1122 00:48:59.120833       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1122 00:48:59.120875       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1122 00:48:59.120922       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [b36b71426eeddbbd8a66fee6ba6d51873fa9668612addcc8f00a16c6fdb775fd] <==
	E1122 00:48:49.859305       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1122 00:48:49.859530       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1122 00:48:49.859629       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1122 00:48:49.859704       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1122 00:48:49.859781       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1122 00:48:49.859852       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1122 00:48:49.860125       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1122 00:48:49.860187       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1122 00:48:49.860246       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1122 00:48:49.860289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1122 00:48:49.860374       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1122 00:48:49.860416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1122 00:48:49.860584       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1122 00:48:50.675390       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1122 00:48:50.687613       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1122 00:48:50.729285       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1122 00:48:50.816346       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1122 00:48:50.842088       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1122 00:48:51.537610       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:49:14.386312       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1122 00:49:14.386339       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1122 00:49:14.386373       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1122 00:49:14.386403       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:49:14.386529       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1122 00:49:14.386557       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [d3753a55d1b0852aeac6d506d250fc46da9733dcd90885d3802044a0a80ad951] <==
	I1122 00:49:26.862955       1 serving.go:386] Generated self-signed cert in-memory
	W1122 00:49:29.202036       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1122 00:49:29.202077       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1122 00:49:29.202087       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1122 00:49:29.202094       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1122 00:49:29.297401       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1122 00:49:29.297502       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:49:29.302123       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:49:29.302231       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:49:29.304579       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1122 00:49:29.304672       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1122 00:49:29.402777       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 22 00:49:22 pause-028559 kubelet[1296]: E1122 00:49:22.802037    1296 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-028559\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="44c3af8ce59f9041bc4996c94c884532" pod="kube-system/kube-apiserver-pause-028559"
	Nov 22 00:49:22 pause-028559 kubelet[1296]: E1122 00:49:22.802536    1296 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-028559\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="b87ad4bf409fe052b180b22ad3a54cf6" pod="kube-system/kube-scheduler-pause-028559"
	Nov 22 00:49:22 pause-028559 kubelet[1296]: I1122 00:49:22.807112    1296 scope.go:117] "RemoveContainer" containerID="7cf54964fa6209d21f425b45edbf33a457dcdc58ce72370d8035cad09e292b10"
	Nov 22 00:49:22 pause-028559 kubelet[1296]: E1122 00:49:22.807543    1296 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-028559\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="60135025660a8ec9cd48fe139ccdc20a" pod="kube-system/etcd-pause-028559"
	Nov 22 00:49:22 pause-028559 kubelet[1296]: E1122 00:49:22.808540    1296 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-028559\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="44c3af8ce59f9041bc4996c94c884532" pod="kube-system/kube-apiserver-pause-028559"
	Nov 22 00:49:22 pause-028559 kubelet[1296]: E1122 00:49:22.814206    1296 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-028559\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="b87ad4bf409fe052b180b22ad3a54cf6" pod="kube-system/kube-scheduler-pause-028559"
	Nov 22 00:49:22 pause-028559 kubelet[1296]: E1122 00:49:22.814480    1296 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-md6h6\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="c69079f9-3127-43a1-99c6-9ec5a41b79cc" pod="kube-system/kindnet-md6h6"
	Nov 22 00:49:22 pause-028559 kubelet[1296]: E1122 00:49:22.814681    1296 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qnj6x\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="1e6d9e0d-242d-484d-be05-aaaf175e8c31" pod="kube-system/kube-proxy-qnj6x"
	Nov 22 00:49:22 pause-028559 kubelet[1296]: E1122 00:49:22.814872    1296 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-028559\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="67f331a25b5bd2923696a64e0dc87204" pod="kube-system/kube-controller-manager-pause-028559"
	Nov 22 00:49:22 pause-028559 kubelet[1296]: I1122 00:49:22.823727    1296 scope.go:117] "RemoveContainer" containerID="e8a18ac29ae5ac05f6b5b4c70bcf6b1fc73a710e59136307cfa68c8bcd36557d"
	Nov 22 00:49:22 pause-028559 kubelet[1296]: E1122 00:49:22.824368    1296 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-028559\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="67f331a25b5bd2923696a64e0dc87204" pod="kube-system/kube-controller-manager-pause-028559"
	Nov 22 00:49:22 pause-028559 kubelet[1296]: E1122 00:49:22.824588    1296 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-028559\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="60135025660a8ec9cd48fe139ccdc20a" pod="kube-system/etcd-pause-028559"
	Nov 22 00:49:22 pause-028559 kubelet[1296]: E1122 00:49:22.824780    1296 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-028559\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="44c3af8ce59f9041bc4996c94c884532" pod="kube-system/kube-apiserver-pause-028559"
	Nov 22 00:49:22 pause-028559 kubelet[1296]: E1122 00:49:22.824974    1296 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-028559\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="b87ad4bf409fe052b180b22ad3a54cf6" pod="kube-system/kube-scheduler-pause-028559"
	Nov 22 00:49:22 pause-028559 kubelet[1296]: E1122 00:49:22.825183    1296 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-md6h6\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="c69079f9-3127-43a1-99c6-9ec5a41b79cc" pod="kube-system/kindnet-md6h6"
	Nov 22 00:49:22 pause-028559 kubelet[1296]: E1122 00:49:22.825374    1296 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qnj6x\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="1e6d9e0d-242d-484d-be05-aaaf175e8c31" pod="kube-system/kube-proxy-qnj6x"
	Nov 22 00:49:22 pause-028559 kubelet[1296]: E1122 00:49:22.825564    1296 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-mf9wz\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="c60bc6ef-6579-4cd2-821a-d54eed09dd2f" pod="kube-system/coredns-66bc5c9577-mf9wz"
	Nov 22 00:49:29 pause-028559 kubelet[1296]: E1122 00:49:29.114419    1296 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-028559\" is forbidden: User \"system:node:pause-028559\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-028559' and this object" podUID="60135025660a8ec9cd48fe139ccdc20a" pod="kube-system/etcd-pause-028559"
	Nov 22 00:49:29 pause-028559 kubelet[1296]: E1122 00:49:29.114956    1296 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-028559\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-028559' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Nov 22 00:49:29 pause-028559 kubelet[1296]: E1122 00:49:29.149132    1296 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-028559\" is forbidden: User \"system:node:pause-028559\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-028559' and this object" podUID="44c3af8ce59f9041bc4996c94c884532" pod="kube-system/kube-apiserver-pause-028559"
	Nov 22 00:49:29 pause-028559 kubelet[1296]: E1122 00:49:29.199478    1296 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-028559\" is forbidden: User \"system:node:pause-028559\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-028559' and this object" podUID="b87ad4bf409fe052b180b22ad3a54cf6" pod="kube-system/kube-scheduler-pause-028559"
	Nov 22 00:49:32 pause-028559 kubelet[1296]: W1122 00:49:32.861596    1296 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Nov 22 00:49:38 pause-028559 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 22 00:49:38 pause-028559 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 22 00:49:38 pause-028559 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-028559 -n pause-028559
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-028559 -n pause-028559: exit status 2 (362.263994ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-028559 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (7.55s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-625837 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-625837 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (272.985864ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:53:15Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-625837 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-625837 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-625837 describe deploy/metrics-server -n kube-system: exit status 1 (87.68557ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-625837 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-625837
helpers_test.go:243: (dbg) docker inspect old-k8s-version-625837:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c1b8e95ff95e3c8d00f04e99efeff1b9aae77957e1730bef176997569aa985cb",
	        "Created": "2025-11-22T00:52:11.631298738Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 693178,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-22T00:52:11.694996457Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:97061622515a85470dad13cb046ad5fc5021fcd2b05aa921a620abb52951cd5d",
	        "ResolvConfPath": "/var/lib/docker/containers/c1b8e95ff95e3c8d00f04e99efeff1b9aae77957e1730bef176997569aa985cb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c1b8e95ff95e3c8d00f04e99efeff1b9aae77957e1730bef176997569aa985cb/hostname",
	        "HostsPath": "/var/lib/docker/containers/c1b8e95ff95e3c8d00f04e99efeff1b9aae77957e1730bef176997569aa985cb/hosts",
	        "LogPath": "/var/lib/docker/containers/c1b8e95ff95e3c8d00f04e99efeff1b9aae77957e1730bef176997569aa985cb/c1b8e95ff95e3c8d00f04e99efeff1b9aae77957e1730bef176997569aa985cb-json.log",
	        "Name": "/old-k8s-version-625837",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-625837:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-625837",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c1b8e95ff95e3c8d00f04e99efeff1b9aae77957e1730bef176997569aa985cb",
	                "LowerDir": "/var/lib/docker/overlay2/d5742c1c24516207890a7b6e13b848d2f42ff607041b29dcbd0346c7d43d472d-init/diff:/var/lib/docker/overlay2/7e8788c6de692bc1c3758a2bb2c4b8da0fbba26855f855c0f3b655bfbdd92f8e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d5742c1c24516207890a7b6e13b848d2f42ff607041b29dcbd0346c7d43d472d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d5742c1c24516207890a7b6e13b848d2f42ff607041b29dcbd0346c7d43d472d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d5742c1c24516207890a7b6e13b848d2f42ff607041b29dcbd0346c7d43d472d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-625837",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-625837/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-625837",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-625837",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-625837",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "10bd96593cdc5b6be87b45352ed6570fd95f0f53fc3ca3bbef931dc17c170fd9",
	            "SandboxKey": "/var/run/docker/netns/10bd96593cdc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33770"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33771"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33774"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33772"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33773"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-625837": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:1d:c7:d9:98:1d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ee4ddaee680d222041a033cf4edb5764a7a32b1715bb1145e84ad0704600fbeb",
	                    "EndpointID": "57f4e7ea0b4d00ce4f7dd0033b65d2932b0683078bbbae6d3869dee09d5b17b8",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-625837",
	                        "c1b8e95ff95e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
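The block above is the raw `docker container inspect` output for the old-k8s-version-625837 node container, including its published ports. As a minimal sketch (assuming that container still exists on the host), the SSH host port recorded there can be read back with the same Go-template filter the minikube logs below use when they resolve the SSH endpoint:

	# Print the host port mapped to the container's 22/tcp (SSH) port;
	# for the inspect output above this prints 33770.
	docker container inspect old-k8s-version-625837 \
	  --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
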
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-625837 -n old-k8s-version-625837
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-625837 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-625837 logs -n 25: (1.207599645s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-163229 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ cilium-163229             │ jenkins │ v1.37.0 │ 22 Nov 25 00:50 UTC │                     │
	│ ssh     │ -p cilium-163229 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-163229             │ jenkins │ v1.37.0 │ 22 Nov 25 00:50 UTC │                     │
	│ ssh     │ -p cilium-163229 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-163229             │ jenkins │ v1.37.0 │ 22 Nov 25 00:50 UTC │                     │
	│ ssh     │ -p cilium-163229 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-163229             │ jenkins │ v1.37.0 │ 22 Nov 25 00:50 UTC │                     │
	│ ssh     │ -p cilium-163229 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-163229             │ jenkins │ v1.37.0 │ 22 Nov 25 00:50 UTC │                     │
	│ ssh     │ -p cilium-163229 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-163229             │ jenkins │ v1.37.0 │ 22 Nov 25 00:50 UTC │                     │
	│ ssh     │ -p cilium-163229 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-163229             │ jenkins │ v1.37.0 │ 22 Nov 25 00:50 UTC │                     │
	│ ssh     │ -p cilium-163229 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-163229             │ jenkins │ v1.37.0 │ 22 Nov 25 00:50 UTC │                     │
	│ ssh     │ -p cilium-163229 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-163229             │ jenkins │ v1.37.0 │ 22 Nov 25 00:50 UTC │                     │
	│ ssh     │ -p cilium-163229 sudo containerd config dump                                                                                                                                                                                                  │ cilium-163229             │ jenkins │ v1.37.0 │ 22 Nov 25 00:50 UTC │                     │
	│ ssh     │ -p cilium-163229 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-163229             │ jenkins │ v1.37.0 │ 22 Nov 25 00:50 UTC │                     │
	│ ssh     │ -p cilium-163229 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-163229             │ jenkins │ v1.37.0 │ 22 Nov 25 00:50 UTC │                     │
	│ ssh     │ -p cilium-163229 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-163229             │ jenkins │ v1.37.0 │ 22 Nov 25 00:50 UTC │                     │
	│ ssh     │ -p cilium-163229 sudo crio config                                                                                                                                                                                                             │ cilium-163229             │ jenkins │ v1.37.0 │ 22 Nov 25 00:50 UTC │                     │
	│ delete  │ -p cilium-163229                                                                                                                                                                                                                              │ cilium-163229             │ jenkins │ v1.37.0 │ 22 Nov 25 00:50 UTC │ 22 Nov 25 00:50 UTC │
	│ start   │ -p force-systemd-env-634519 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-634519  │ jenkins │ v1.37.0 │ 22 Nov 25 00:50 UTC │ 22 Nov 25 00:51 UTC │
	│ delete  │ -p kubernetes-upgrade-134864                                                                                                                                                                                                                  │ kubernetes-upgrade-134864 │ jenkins │ v1.37.0 │ 22 Nov 25 00:50 UTC │ 22 Nov 25 00:51 UTC │
	│ start   │ -p cert-expiration-621390 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-621390    │ jenkins │ v1.37.0 │ 22 Nov 25 00:51 UTC │ 22 Nov 25 00:51 UTC │
	│ delete  │ -p force-systemd-env-634519                                                                                                                                                                                                                   │ force-systemd-env-634519  │ jenkins │ v1.37.0 │ 22 Nov 25 00:51 UTC │ 22 Nov 25 00:51 UTC │
	│ start   │ -p cert-options-002126 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-002126       │ jenkins │ v1.37.0 │ 22 Nov 25 00:51 UTC │ 22 Nov 25 00:52 UTC │
	│ ssh     │ cert-options-002126 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-002126       │ jenkins │ v1.37.0 │ 22 Nov 25 00:52 UTC │ 22 Nov 25 00:52 UTC │
	│ ssh     │ -p cert-options-002126 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-002126       │ jenkins │ v1.37.0 │ 22 Nov 25 00:52 UTC │ 22 Nov 25 00:52 UTC │
	│ delete  │ -p cert-options-002126                                                                                                                                                                                                                        │ cert-options-002126       │ jenkins │ v1.37.0 │ 22 Nov 25 00:52 UTC │ 22 Nov 25 00:52 UTC │
	│ start   │ -p old-k8s-version-625837 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-625837    │ jenkins │ v1.37.0 │ 22 Nov 25 00:52 UTC │ 22 Nov 25 00:53 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-625837 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-625837    │ jenkins │ v1.37.0 │ 22 Nov 25 00:53 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 00:52:05
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1122 00:52:05.582562  692782 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:52:05.582818  692782 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:52:05.582860  692782 out.go:374] Setting ErrFile to fd 2...
	I1122 00:52:05.582886  692782 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:52:05.583184  692782 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1122 00:52:05.583792  692782 out.go:368] Setting JSON to false
	I1122 00:52:05.584886  692782 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":20042,"bootTime":1763752684,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1122 00:52:05.584995  692782 start.go:143] virtualization:  
	I1122 00:52:05.589293  692782 out.go:179] * [old-k8s-version-625837] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1122 00:52:05.593997  692782 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:52:05.594197  692782 notify.go:221] Checking for updates...
	I1122 00:52:05.598240  692782 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:52:05.601611  692782 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1122 00:52:05.605083  692782 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-513600/.minikube
	I1122 00:52:05.608353  692782 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1122 00:52:05.611714  692782 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:52:05.615413  692782 config.go:182] Loaded profile config "cert-expiration-621390": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:52:05.615561  692782 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:52:05.643424  692782 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1122 00:52:05.643557  692782 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:52:05.706358  692782 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-22 00:52:05.695964349 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:52:05.706458  692782 docker.go:319] overlay module found
	I1122 00:52:05.709837  692782 out.go:179] * Using the docker driver based on user configuration
	I1122 00:52:05.712940  692782 start.go:309] selected driver: docker
	I1122 00:52:05.712962  692782 start.go:930] validating driver "docker" against <nil>
	I1122 00:52:05.712975  692782 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:52:05.713738  692782 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:52:05.774593  692782 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-22 00:52:05.765408672 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:52:05.774746  692782 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1122 00:52:05.774974  692782 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:52:05.778393  692782 out.go:179] * Using Docker driver with root privileges
	I1122 00:52:05.781419  692782 cni.go:84] Creating CNI manager for ""
	I1122 00:52:05.781487  692782 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:52:05.781501  692782 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1122 00:52:05.781592  692782 start.go:353] cluster config:
	{Name:old-k8s-version-625837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-625837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:52:05.784961  692782 out.go:179] * Starting "old-k8s-version-625837" primary control-plane node in "old-k8s-version-625837" cluster
	I1122 00:52:05.787919  692782 cache.go:134] Beginning downloading kic base image for docker with crio
	I1122 00:52:05.791198  692782 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:52:05.794103  692782 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1122 00:52:05.794159  692782 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1122 00:52:05.794174  692782 cache.go:65] Caching tarball of preloaded images
	I1122 00:52:05.794193  692782 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:52:05.794274  692782 preload.go:238] Found /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1122 00:52:05.794286  692782 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1122 00:52:05.794393  692782 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/old-k8s-version-625837/config.json ...
	I1122 00:52:05.794410  692782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/old-k8s-version-625837/config.json: {Name:mk18d5504e33b43c53bae289e9c08788274add72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:52:05.817365  692782 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:52:05.817387  692782 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:52:05.817405  692782 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:52:05.817429  692782 start.go:360] acquireMachinesLock for old-k8s-version-625837: {Name:mk3a3c501372daeff07fa7d5836846284b6136f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:52:05.817540  692782 start.go:364] duration metric: took 91.624µs to acquireMachinesLock for "old-k8s-version-625837"
	I1122 00:52:05.817567  692782 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-625837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-625837 Namespace:default APIServerHAVIP:
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:52:05.817648  692782 start.go:125] createHost starting for "" (driver="docker")
	I1122 00:52:05.821087  692782 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1122 00:52:05.821302  692782 start.go:159] libmachine.API.Create for "old-k8s-version-625837" (driver="docker")
	I1122 00:52:05.821332  692782 client.go:173] LocalClient.Create starting
	I1122 00:52:05.821432  692782 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem
	I1122 00:52:05.821469  692782 main.go:143] libmachine: Decoding PEM data...
	I1122 00:52:05.821486  692782 main.go:143] libmachine: Parsing certificate...
	I1122 00:52:05.821568  692782 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem
	I1122 00:52:05.821589  692782 main.go:143] libmachine: Decoding PEM data...
	I1122 00:52:05.821601  692782 main.go:143] libmachine: Parsing certificate...
	I1122 00:52:05.822026  692782 cli_runner.go:164] Run: docker network inspect old-k8s-version-625837 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1122 00:52:05.840019  692782 cli_runner.go:211] docker network inspect old-k8s-version-625837 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1122 00:52:05.840147  692782 network_create.go:284] running [docker network inspect old-k8s-version-625837] to gather additional debugging logs...
	I1122 00:52:05.840180  692782 cli_runner.go:164] Run: docker network inspect old-k8s-version-625837
	W1122 00:52:05.857035  692782 cli_runner.go:211] docker network inspect old-k8s-version-625837 returned with exit code 1
	I1122 00:52:05.857067  692782 network_create.go:287] error running [docker network inspect old-k8s-version-625837]: docker network inspect old-k8s-version-625837: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-625837 not found
	I1122 00:52:05.857081  692782 network_create.go:289] output of [docker network inspect old-k8s-version-625837]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-625837 not found
	
	** /stderr **
	I1122 00:52:05.857190  692782 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:52:05.875011  692782 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b16c782e3da8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ba:82:00:9d:45:d0} reservation:<nil>}
	I1122 00:52:05.875390  692782 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-13c9c00b5de5 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:7a:4e:a4:3d:42:9e} reservation:<nil>}
	I1122 00:52:05.875727  692782 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3c074a6aa87b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9a:1f:77:e5:90:0b} reservation:<nil>}
	I1122 00:52:05.876071  692782 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-dbcd48fe48d9 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:3e:60:c6:7d:54:5c} reservation:<nil>}
	I1122 00:52:05.876509  692782 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d9b80}
	I1122 00:52:05.876532  692782 network_create.go:124] attempt to create docker network old-k8s-version-625837 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1122 00:52:05.876597  692782 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-625837 old-k8s-version-625837
	I1122 00:52:05.947664  692782 network_create.go:108] docker network old-k8s-version-625837 192.168.85.0/24 created
	I1122 00:52:05.947705  692782 kic.go:121] calculated static IP "192.168.85.2" for the "old-k8s-version-625837" container
	I1122 00:52:05.947777  692782 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1122 00:52:05.964108  692782 cli_runner.go:164] Run: docker volume create old-k8s-version-625837 --label name.minikube.sigs.k8s.io=old-k8s-version-625837 --label created_by.minikube.sigs.k8s.io=true
	I1122 00:52:05.981005  692782 oci.go:103] Successfully created a docker volume old-k8s-version-625837
	I1122 00:52:05.981093  692782 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-625837-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-625837 --entrypoint /usr/bin/test -v old-k8s-version-625837:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -d /var/lib
	I1122 00:52:06.529942  692782 oci.go:107] Successfully prepared a docker volume old-k8s-version-625837
	I1122 00:52:06.530017  692782 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1122 00:52:06.530034  692782 kic.go:194] Starting extracting preloaded images to volume ...
	I1122 00:52:06.530098  692782 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-625837:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir
	I1122 00:52:11.562679  692782 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-625837:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir: (5.032544312s)
	I1122 00:52:11.562715  692782 kic.go:203] duration metric: took 5.032677806s to extract preloaded images to volume ...
	W1122 00:52:11.562864  692782 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1122 00:52:11.562979  692782 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1122 00:52:11.616527  692782 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-625837 --name old-k8s-version-625837 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-625837 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-625837 --network old-k8s-version-625837 --ip 192.168.85.2 --volume old-k8s-version-625837:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e
	I1122 00:52:11.918721  692782 cli_runner.go:164] Run: docker container inspect old-k8s-version-625837 --format={{.State.Running}}
	I1122 00:52:11.936143  692782 cli_runner.go:164] Run: docker container inspect old-k8s-version-625837 --format={{.State.Status}}
	I1122 00:52:11.955655  692782 cli_runner.go:164] Run: docker exec old-k8s-version-625837 stat /var/lib/dpkg/alternatives/iptables
	I1122 00:52:12.007181  692782 oci.go:144] the created container "old-k8s-version-625837" has a running status.
	I1122 00:52:12.007225  692782 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/old-k8s-version-625837/id_rsa...
	I1122 00:52:12.055832  692782 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21934-513600/.minikube/machines/old-k8s-version-625837/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1122 00:52:12.082845  692782 cli_runner.go:164] Run: docker container inspect old-k8s-version-625837 --format={{.State.Status}}
	I1122 00:52:12.104810  692782 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1122 00:52:12.104855  692782 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-625837 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1122 00:52:12.155566  692782 cli_runner.go:164] Run: docker container inspect old-k8s-version-625837 --format={{.State.Status}}
	I1122 00:52:12.179631  692782 machine.go:94] provisionDockerMachine start ...
	I1122 00:52:12.179812  692782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-625837
	I1122 00:52:12.205251  692782 main.go:143] libmachine: Using SSH client type: native
	I1122 00:52:12.205592  692782 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33770 <nil> <nil>}
	I1122 00:52:12.205609  692782 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:52:12.206327  692782 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1122 00:52:15.346486  692782 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-625837
	
	I1122 00:52:15.346511  692782 ubuntu.go:182] provisioning hostname "old-k8s-version-625837"
	I1122 00:52:15.346582  692782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-625837
	I1122 00:52:15.368866  692782 main.go:143] libmachine: Using SSH client type: native
	I1122 00:52:15.369192  692782 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33770 <nil> <nil>}
	I1122 00:52:15.369207  692782 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-625837 && echo "old-k8s-version-625837" | sudo tee /etc/hostname
	I1122 00:52:15.527223  692782 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-625837
	
	I1122 00:52:15.527310  692782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-625837
	I1122 00:52:15.553363  692782 main.go:143] libmachine: Using SSH client type: native
	I1122 00:52:15.553670  692782 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33770 <nil> <nil>}
	I1122 00:52:15.553685  692782 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-625837' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-625837/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-625837' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:52:15.694031  692782 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1122 00:52:15.694058  692782 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-513600/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-513600/.minikube}
	I1122 00:52:15.694078  692782 ubuntu.go:190] setting up certificates
	I1122 00:52:15.694088  692782 provision.go:84] configureAuth start
	I1122 00:52:15.694145  692782 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-625837
	I1122 00:52:15.711286  692782 provision.go:143] copyHostCerts
	I1122 00:52:15.711363  692782 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem, removing ...
	I1122 00:52:15.711376  692782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem
	I1122 00:52:15.711463  692782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem (1078 bytes)
	I1122 00:52:15.711575  692782 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem, removing ...
	I1122 00:52:15.711587  692782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem
	I1122 00:52:15.711617  692782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem (1123 bytes)
	I1122 00:52:15.711682  692782 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem, removing ...
	I1122 00:52:15.711695  692782 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem
	I1122 00:52:15.711719  692782 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem (1675 bytes)
	I1122 00:52:15.711781  692782 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-625837 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-625837]
	I1122 00:52:15.948517  692782 provision.go:177] copyRemoteCerts
	I1122 00:52:15.948602  692782 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:52:15.948714  692782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-625837
	I1122 00:52:15.968789  692782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33770 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/old-k8s-version-625837/id_rsa Username:docker}
	I1122 00:52:16.070638  692782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:52:16.091280  692782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1122 00:52:16.110125  692782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1122 00:52:16.129695  692782 provision.go:87] duration metric: took 435.582999ms to configureAuth
	I1122 00:52:16.129725  692782 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:52:16.129969  692782 config.go:182] Loaded profile config "old-k8s-version-625837": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1122 00:52:16.130108  692782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-625837
	I1122 00:52:16.148455  692782 main.go:143] libmachine: Using SSH client type: native
	I1122 00:52:16.148784  692782 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33770 <nil> <nil>}
	I1122 00:52:16.148805  692782 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1122 00:52:16.445871  692782 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1122 00:52:16.445893  692782 machine.go:97] duration metric: took 4.266196245s to provisionDockerMachine
	I1122 00:52:16.445903  692782 client.go:176] duration metric: took 10.62456146s to LocalClient.Create
	I1122 00:52:16.445917  692782 start.go:167] duration metric: took 10.624617269s to libmachine.API.Create "old-k8s-version-625837"
	I1122 00:52:16.445925  692782 start.go:293] postStartSetup for "old-k8s-version-625837" (driver="docker")
	I1122 00:52:16.445935  692782 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:52:16.445994  692782 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:52:16.446088  692782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-625837
	I1122 00:52:16.463488  692782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33770 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/old-k8s-version-625837/id_rsa Username:docker}
	I1122 00:52:16.566480  692782 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:52:16.569776  692782 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:52:16.569869  692782 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:52:16.569886  692782 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/addons for local assets ...
	I1122 00:52:16.569934  692782 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/files for local assets ...
	I1122 00:52:16.570018  692782 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem -> 5169372.pem in /etc/ssl/certs
	I1122 00:52:16.570121  692782 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:52:16.577238  692782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /etc/ssl/certs/5169372.pem (1708 bytes)
	I1122 00:52:16.594653  692782 start.go:296] duration metric: took 148.71354ms for postStartSetup
	I1122 00:52:16.595021  692782 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-625837
	I1122 00:52:16.612453  692782 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/old-k8s-version-625837/config.json ...
	I1122 00:52:16.612740  692782 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:52:16.612790  692782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-625837
	I1122 00:52:16.630040  692782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33770 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/old-k8s-version-625837/id_rsa Username:docker}
	I1122 00:52:16.726537  692782 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:52:16.730963  692782 start.go:128] duration metric: took 10.913301246s to createHost
	I1122 00:52:16.730988  692782 start.go:83] releasing machines lock for "old-k8s-version-625837", held for 10.913436086s
	I1122 00:52:16.731056  692782 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-625837
	I1122 00:52:16.747102  692782 ssh_runner.go:195] Run: cat /version.json
	I1122 00:52:16.747149  692782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-625837
	I1122 00:52:16.747157  692782 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:52:16.747216  692782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-625837
	I1122 00:52:16.769751  692782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33770 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/old-k8s-version-625837/id_rsa Username:docker}
	I1122 00:52:16.769976  692782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33770 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/old-k8s-version-625837/id_rsa Username:docker}
	I1122 00:52:16.869356  692782 ssh_runner.go:195] Run: systemctl --version
	I1122 00:52:16.959899  692782 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1122 00:52:17.000445  692782 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:52:17.006379  692782 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:52:17.006452  692782 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:52:17.036050  692782 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1122 00:52:17.036120  692782 start.go:496] detecting cgroup driver to use...
	I1122 00:52:17.036163  692782 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1122 00:52:17.036241  692782 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1122 00:52:17.053422  692782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1122 00:52:17.066441  692782 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:52:17.066513  692782 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:52:17.084447  692782 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:52:17.101777  692782 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:52:17.225309  692782 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:52:17.348822  692782 docker.go:234] disabling docker service ...
	I1122 00:52:17.348891  692782 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:52:17.372449  692782 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:52:17.387342  692782 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:52:17.515175  692782 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:52:17.644232  692782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:52:17.657278  692782 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:52:17.670693  692782 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1122 00:52:17.670763  692782 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:52:17.679765  692782 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1122 00:52:17.679852  692782 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:52:17.688213  692782 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:52:17.698133  692782 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:52:17.707511  692782 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:52:17.715413  692782 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:52:17.723719  692782 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:52:17.736625  692782 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:52:17.746068  692782 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:52:17.753583  692782 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:52:17.760749  692782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:52:17.878174  692782 ssh_runner.go:195] Run: sudo systemctl restart crio
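The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O pins registry.k8s.io/pause:3.9 as the pause image, uses the cgroupfs cgroup manager with conmon in the "pod" cgroup, and opens unprivileged ports via default_sysctls, after which systemd is reloaded and CRI-O restarted. A minimal spot-check on the node (purely illustrative, not part of the test run) could look like:

    # confirm the values minikube wrote into the CRI-O drop-in
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    sudo grep -A 2 default_sysctls /etc/crio/crio.conf.d/02-crio.conf
    sudo systemctl is-active crio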
	I1122 00:52:18.061472  692782 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1122 00:52:18.061612  692782 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1122 00:52:18.065529  692782 start.go:564] Will wait 60s for crictl version
	I1122 00:52:18.065611  692782 ssh_runner.go:195] Run: which crictl
	I1122 00:52:18.069117  692782 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:52:18.099900  692782 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1122 00:52:18.099981  692782 ssh_runner.go:195] Run: crio --version
	I1122 00:52:18.128542  692782 ssh_runner.go:195] Run: crio --version
	I1122 00:52:18.162764  692782 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.2 ...
	I1122 00:52:18.165728  692782 cli_runner.go:164] Run: docker network inspect old-k8s-version-625837 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:52:18.187373  692782 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1122 00:52:18.191310  692782 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:52:18.202120  692782 kubeadm.go:884] updating cluster {Name:old-k8s-version-625837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-625837 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1122 00:52:18.202295  692782 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1122 00:52:18.202358  692782 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:52:18.246078  692782 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:52:18.246108  692782 crio.go:433] Images already preloaded, skipping extraction
	I1122 00:52:18.246166  692782 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:52:18.290009  692782 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:52:18.290030  692782 cache_images.go:86] Images are preloaded, skipping loading
	I1122 00:52:18.290038  692782 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1122 00:52:18.290186  692782 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-625837 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-625837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
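The block above is the kubelet systemd drop-in minikube renders for this node: ExecStart is cleared and re-set to the v1.28.0 kubelet with the node-specific --hostname-override and --node-ip flags, followed by the KubernetesConfig it was derived from. Assuming the standard drop-in path written a few lines below (/etc/systemd/system/kubelet.service.d/10-kubeadm.conf), the effective unit could be inspected with something like:

    # show the kubelet unit together with its drop-ins
    systemctl cat kubelet
    grep -- '--node-ip' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf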
	I1122 00:52:18.290264  692782 ssh_runner.go:195] Run: crio config
	I1122 00:52:18.350389  692782 cni.go:84] Creating CNI manager for ""
	I1122 00:52:18.350411  692782 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:52:18.350427  692782 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1122 00:52:18.350450  692782 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-625837 NodeName:old-k8s-version-625837 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 00:52:18.350593  692782 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-625837"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1122 00:52:18.350672  692782 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1122 00:52:18.358532  692782 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:52:18.358642  692782 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1122 00:52:18.366634  692782 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1122 00:52:18.380334  692782 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:52:18.396898  692782 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
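At this point the rendered kubeadm config has been copied to /var/tmp/minikube/kubeadm.yaml.new on the node. One detail worth checking when debugging runtime/kubelet mismatches is that the KubeletConfiguration's cgroupDriver agrees with the cgroup_manager CRI-O was configured with earlier in this log; a rough, illustrative check:

    # both lines should report cgroupfs
    grep cgroupDriver /var/tmp/minikube/kubeadm.yaml.new
    sudo grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf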
	I1122 00:52:18.409648  692782 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1122 00:52:18.413308  692782 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:52:18.423968  692782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:52:18.560549  692782 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:52:18.587617  692782 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/old-k8s-version-625837 for IP: 192.168.85.2
	I1122 00:52:18.587641  692782 certs.go:195] generating shared ca certs ...
	I1122 00:52:18.587657  692782 certs.go:227] acquiring lock for ca certs: {Name:mkaf4c79493334cb2058b349c36be7473837f9b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:52:18.587797  692782 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key
	I1122 00:52:18.587847  692782 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key
	I1122 00:52:18.587859  692782 certs.go:257] generating profile certs ...
	I1122 00:52:18.587915  692782 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/old-k8s-version-625837/client.key
	I1122 00:52:18.587932  692782 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/old-k8s-version-625837/client.crt with IP's: []
	I1122 00:52:18.732100  692782 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/old-k8s-version-625837/client.crt ...
	I1122 00:52:18.732131  692782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/old-k8s-version-625837/client.crt: {Name:mk212fa24b7dca80eca04392e1ece500a90788c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:52:18.732346  692782 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/old-k8s-version-625837/client.key ...
	I1122 00:52:18.732364  692782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/old-k8s-version-625837/client.key: {Name:mk363d1366672af59901ad063baae004291fd0cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:52:18.732466  692782 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/old-k8s-version-625837/apiserver.key.4c41b9ba
	I1122 00:52:18.732487  692782 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/old-k8s-version-625837/apiserver.crt.4c41b9ba with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1122 00:52:19.037888  692782 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/old-k8s-version-625837/apiserver.crt.4c41b9ba ...
	I1122 00:52:19.037920  692782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/old-k8s-version-625837/apiserver.crt.4c41b9ba: {Name:mke37f83b4940d06ac7759ca1d00b085208ba27c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:52:19.038113  692782 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/old-k8s-version-625837/apiserver.key.4c41b9ba ...
	I1122 00:52:19.038130  692782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/old-k8s-version-625837/apiserver.key.4c41b9ba: {Name:mk6d08d1502db37e348ac11eff03d219f363328e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:52:19.038220  692782 certs.go:382] copying /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/old-k8s-version-625837/apiserver.crt.4c41b9ba -> /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/old-k8s-version-625837/apiserver.crt
	I1122 00:52:19.038298  692782 certs.go:386] copying /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/old-k8s-version-625837/apiserver.key.4c41b9ba -> /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/old-k8s-version-625837/apiserver.key
	I1122 00:52:19.038360  692782 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/old-k8s-version-625837/proxy-client.key
	I1122 00:52:19.038378  692782 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/old-k8s-version-625837/proxy-client.crt with IP's: []
	I1122 00:52:19.290991  692782 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/old-k8s-version-625837/proxy-client.crt ...
	I1122 00:52:19.291019  692782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/old-k8s-version-625837/proxy-client.crt: {Name:mk9c49fc158bae74b458c23c4185fdc1f4aacfa8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:52:19.291206  692782 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/old-k8s-version-625837/proxy-client.key ...
	I1122 00:52:19.291227  692782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/old-k8s-version-625837/proxy-client.key: {Name:mk9b9ab9112c7038a11b3d0d5940383fb6f5df1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:52:19.291446  692782 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem (1338 bytes)
	W1122 00:52:19.291493  692782 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937_empty.pem, impossibly tiny 0 bytes
	I1122 00:52:19.291505  692782 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:52:19.291532  692782 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:52:19.291563  692782 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:52:19.291590  692782 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem (1675 bytes)
	I1122 00:52:19.291652  692782 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem (1708 bytes)
	I1122 00:52:19.292204  692782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:52:19.311465  692782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1122 00:52:19.330282  692782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:52:19.349702  692782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:52:19.367758  692782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/old-k8s-version-625837/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1122 00:52:19.385244  692782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/old-k8s-version-625837/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1122 00:52:19.404316  692782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/old-k8s-version-625837/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:52:19.422385  692782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/old-k8s-version-625837/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1122 00:52:19.440079  692782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:52:19.456923  692782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem --> /usr/share/ca-certificates/516937.pem (1338 bytes)
	I1122 00:52:19.481705  692782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /usr/share/ca-certificates/5169372.pem (1708 bytes)
	I1122 00:52:19.499604  692782 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 00:52:19.513384  692782 ssh_runner.go:195] Run: openssl version
	I1122 00:52:19.519627  692782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5169372.pem && ln -fs /usr/share/ca-certificates/5169372.pem /etc/ssl/certs/5169372.pem"
	I1122 00:52:19.530623  692782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5169372.pem
	I1122 00:52:19.537307  692782 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:55 /usr/share/ca-certificates/5169372.pem
	I1122 00:52:19.537389  692782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5169372.pem
	I1122 00:52:19.590699  692782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5169372.pem /etc/ssl/certs/3ec20f2e.0"
	I1122 00:52:19.600122  692782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:52:19.610239  692782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:52:19.614146  692782 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:52:19.614209  692782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:52:19.655876  692782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1122 00:52:19.665078  692782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516937.pem && ln -fs /usr/share/ca-certificates/516937.pem /etc/ssl/certs/516937.pem"
	I1122 00:52:19.674623  692782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516937.pem
	I1122 00:52:19.679081  692782 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:55 /usr/share/ca-certificates/516937.pem
	I1122 00:52:19.679141  692782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516937.pem
	I1122 00:52:19.725206  692782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516937.pem /etc/ssl/certs/51391683.0"
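The commands above install the test CA material the OpenSSL way: each certificate is placed under /usr/share/ca-certificates, its subject hash is computed with openssl x509 -hash -noout, and a <hash>.0 symlink is created in /etc/ssl/certs so trust lookups resolve it (3ec20f2e, b5213941 and 51391683 are exactly such hashes). Condensed into a generic sketch for any PEM certificate (illustrative; the variable names are hypothetical):

    CERT=/usr/share/ca-certificates/minikubeCA.pem   # any PEM certificate
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"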
	I1122 00:52:19.734261  692782 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:52:19.738365  692782 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1122 00:52:19.738425  692782 kubeadm.go:401] StartCluster: {Name:old-k8s-version-625837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-625837 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwar
ePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:52:19.738505  692782 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1122 00:52:19.738565  692782 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:52:19.772427  692782 cri.go:89] found id: ""
	I1122 00:52:19.772493  692782 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 00:52:19.781546  692782 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1122 00:52:19.789728  692782 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1122 00:52:19.789790  692782 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1122 00:52:19.797698  692782 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1122 00:52:19.797718  692782 kubeadm.go:158] found existing configuration files:
	
	I1122 00:52:19.797767  692782 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1122 00:52:19.805670  692782 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1122 00:52:19.805773  692782 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1122 00:52:19.813483  692782 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1122 00:52:19.821420  692782 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1122 00:52:19.821484  692782 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1122 00:52:19.829401  692782 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1122 00:52:19.837706  692782 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1122 00:52:19.837797  692782 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1122 00:52:19.844783  692782 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1122 00:52:19.852530  692782 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1122 00:52:19.852601  692782 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1122 00:52:19.860697  692782 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1122 00:52:19.905153  692782 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1122 00:52:19.905524  692782 kubeadm.go:319] [preflight] Running pre-flight checks
	I1122 00:52:19.951581  692782 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1122 00:52:19.951659  692782 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1122 00:52:19.951706  692782 kubeadm.go:319] OS: Linux
	I1122 00:52:19.951756  692782 kubeadm.go:319] CGROUPS_CPU: enabled
	I1122 00:52:19.951809  692782 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1122 00:52:19.951860  692782 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1122 00:52:19.951911  692782 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1122 00:52:19.951962  692782 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1122 00:52:19.952020  692782 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1122 00:52:19.952070  692782 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1122 00:52:19.952123  692782 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1122 00:52:19.952173  692782 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1122 00:52:20.038561  692782 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1122 00:52:20.038682  692782 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1122 00:52:20.038782  692782 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1122 00:52:20.205226  692782 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1122 00:52:20.210045  692782 out.go:252]   - Generating certificates and keys ...
	I1122 00:52:20.210204  692782 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1122 00:52:20.210289  692782 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1122 00:52:20.656870  692782 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1122 00:52:21.415555  692782 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1122 00:52:21.768760  692782 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1122 00:52:22.015573  692782 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1122 00:52:23.236265  692782 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1122 00:52:23.236637  692782 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-625837] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1122 00:52:23.698636  692782 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1122 00:52:23.698985  692782 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-625837] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1122 00:52:23.948759  692782 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1122 00:52:24.302522  692782 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1122 00:52:24.496684  692782 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1122 00:52:24.496952  692782 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1122 00:52:24.897159  692782 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1122 00:52:25.257775  692782 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1122 00:52:25.820795  692782 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1122 00:52:26.021096  692782 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1122 00:52:26.022197  692782 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1122 00:52:26.025597  692782 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1122 00:52:26.028873  692782 out.go:252]   - Booting up control plane ...
	I1122 00:52:26.028979  692782 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1122 00:52:26.029056  692782 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1122 00:52:26.030328  692782 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1122 00:52:26.048223  692782 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1122 00:52:26.049218  692782 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1122 00:52:26.049271  692782 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1122 00:52:26.175116  692782 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1122 00:52:33.678507  692782 kubeadm.go:319] [apiclient] All control plane components are healthy after 7.504615 seconds
	I1122 00:52:33.678631  692782 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1122 00:52:33.703029  692782 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1122 00:52:34.238597  692782 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1122 00:52:34.238810  692782 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-625837 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1122 00:52:34.751769  692782 kubeadm.go:319] [bootstrap-token] Using token: 9oe7wx.7a0gz3anb3gsketc
	I1122 00:52:34.754700  692782 out.go:252]   - Configuring RBAC rules ...
	I1122 00:52:34.754821  692782 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1122 00:52:34.759101  692782 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1122 00:52:34.767304  692782 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1122 00:52:34.771349  692782 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1122 00:52:34.775495  692782 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1122 00:52:34.781535  692782 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1122 00:52:34.795127  692782 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1122 00:52:35.152936  692782 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1122 00:52:35.209113  692782 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1122 00:52:35.211155  692782 kubeadm.go:319] 
	I1122 00:52:35.211230  692782 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1122 00:52:35.211235  692782 kubeadm.go:319] 
	I1122 00:52:35.211312  692782 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1122 00:52:35.211316  692782 kubeadm.go:319] 
	I1122 00:52:35.211353  692782 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1122 00:52:35.211413  692782 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1122 00:52:35.211463  692782 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1122 00:52:35.211467  692782 kubeadm.go:319] 
	I1122 00:52:35.211521  692782 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1122 00:52:35.211525  692782 kubeadm.go:319] 
	I1122 00:52:35.211572  692782 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1122 00:52:35.211576  692782 kubeadm.go:319] 
	I1122 00:52:35.211628  692782 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1122 00:52:35.211702  692782 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1122 00:52:35.211771  692782 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1122 00:52:35.211775  692782 kubeadm.go:319] 
	I1122 00:52:35.211859  692782 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1122 00:52:35.211937  692782 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1122 00:52:35.211941  692782 kubeadm.go:319] 
	I1122 00:52:35.212025  692782 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 9oe7wx.7a0gz3anb3gsketc \
	I1122 00:52:35.212128  692782 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ecfebb5fda4f065a571cf90106e71e452abce05aaa4d3155b81d7383977d6854 \
	I1122 00:52:35.212148  692782 kubeadm.go:319] 	--control-plane 
	I1122 00:52:35.212152  692782 kubeadm.go:319] 
	I1122 00:52:35.212237  692782 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1122 00:52:35.212241  692782 kubeadm.go:319] 
	I1122 00:52:35.212323  692782 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 9oe7wx.7a0gz3anb3gsketc \
	I1122 00:52:35.212426  692782 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ecfebb5fda4f065a571cf90106e71e452abce05aaa4d3155b81d7383977d6854 
	I1122 00:52:35.216870  692782 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1122 00:52:35.216998  692782 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1122 00:52:35.217013  692782 cni.go:84] Creating CNI manager for ""
	I1122 00:52:35.217021  692782 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:52:35.220212  692782 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1122 00:52:35.223111  692782 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1122 00:52:35.227566  692782 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1122 00:52:35.227595  692782 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1122 00:52:35.262912  692782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1122 00:52:36.268747  692782 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.005802325s)
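With the docker driver and crio runtime minikube picks kindnet as the CNI and applies its manifest with the bundled kubectl, which completed above in about a second. Assuming the manifest creates a DaemonSet named kindnet in kube-system (the kindnet-h6vbs pod later in this log is consistent with that), its rollout could be watched the same way:

    sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system rollout status daemonset kindnet --timeout=60s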
	I1122 00:52:36.268799  692782 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1122 00:52:36.268953  692782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:52:36.269053  692782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-625837 minikube.k8s.io/updated_at=2025_11_22T00_52_36_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785 minikube.k8s.io/name=old-k8s-version-625837 minikube.k8s.io/primary=true
	I1122 00:52:36.280184  692782 ops.go:34] apiserver oom_adj: -16
	I1122 00:52:36.416900  692782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:52:36.916947  692782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:52:37.416998  692782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:52:37.917049  692782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:52:38.417516  692782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:52:38.917093  692782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:52:39.417650  692782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:52:39.917992  692782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:52:40.417919  692782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:52:40.917492  692782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:52:41.417493  692782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:52:41.917505  692782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:52:42.417486  692782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:52:42.917740  692782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:52:43.417007  692782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:52:43.917149  692782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:52:44.416954  692782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:52:44.917120  692782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:52:45.417552  692782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:52:45.917976  692782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:52:46.417584  692782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:52:46.917751  692782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:52:47.417621  692782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:52:47.624654  692782 kubeadm.go:1114] duration metric: took 11.355746286s to wait for elevateKubeSystemPrivileges
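The block of repeated "kubectl get sa default" runs above is minikube polling roughly every 500ms until the default service account exists, i.e. until kube-system privileges have been elevated; it took about 11.4s here. Written out by hand, the equivalent wait loop would be something like (illustrative only):

    until sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
          get sa default >/dev/null 2>&1; do
      sleep 0.5
    done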
	I1122 00:52:47.624682  692782 kubeadm.go:403] duration metric: took 27.886261946s to StartCluster
	I1122 00:52:47.624698  692782 settings.go:142] acquiring lock: {Name:mk6c31eb57ec65b047b78b4e1046e03fe7cc77bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:52:47.624759  692782 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1122 00:52:47.625720  692782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/kubeconfig: {Name:mkcd094d8a765e5a81369c0fb33cb5e696b17bb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:52:47.625952  692782 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:52:47.626105  692782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1122 00:52:47.626352  692782 config.go:182] Loaded profile config "old-k8s-version-625837": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1122 00:52:47.626388  692782 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1122 00:52:47.626455  692782 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-625837"
	I1122 00:52:47.626470  692782 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-625837"
	I1122 00:52:47.626490  692782 host.go:66] Checking if "old-k8s-version-625837" exists ...
	I1122 00:52:47.626980  692782 cli_runner.go:164] Run: docker container inspect old-k8s-version-625837 --format={{.State.Status}}
	I1122 00:52:47.627943  692782 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-625837"
	I1122 00:52:47.627964  692782 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-625837"
	I1122 00:52:47.628244  692782 cli_runner.go:164] Run: docker container inspect old-k8s-version-625837 --format={{.State.Status}}
	I1122 00:52:47.629546  692782 out.go:179] * Verifying Kubernetes components...
	I1122 00:52:47.631751  692782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:52:47.664097  692782 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-625837"
	I1122 00:52:47.664137  692782 host.go:66] Checking if "old-k8s-version-625837" exists ...
	I1122 00:52:47.664556  692782 cli_runner.go:164] Run: docker container inspect old-k8s-version-625837 --format={{.State.Status}}
	I1122 00:52:47.687844  692782 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1122 00:52:47.690680  692782 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:52:47.690704  692782 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1122 00:52:47.690789  692782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-625837
	I1122 00:52:47.713773  692782 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1122 00:52:47.713793  692782 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1122 00:52:47.713896  692782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-625837
	I1122 00:52:47.758038  692782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33770 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/old-k8s-version-625837/id_rsa Username:docker}
	I1122 00:52:47.759884  692782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33770 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/old-k8s-version-625837/id_rsa Username:docker}
	I1122 00:52:48.083240  692782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1122 00:52:48.107015  692782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1122 00:52:48.107144  692782 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:52:48.123894  692782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:52:48.795612  692782 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1122 00:52:48.797466  692782 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-625837" to be "Ready" ...
	I1122 00:52:49.159384  692782 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.03541344s)
	I1122 00:52:49.162826  692782 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1122 00:52:49.165839  692782 addons.go:530] duration metric: took 1.539445315s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1122 00:52:49.302629  692782 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-625837" context rescaled to 1 replicas
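Two post-start adjustments happen here: the sed pipeline shown above injects a hosts stanza into the coredns ConfigMap so host.minikube.internal resolves to the gateway (192.168.85.1), and the coredns deployment is rescaled to a single replica. The injected Corefile fragment, reconstructed from that sed expression, is roughly:

    hosts {
       192.168.85.1 host.minikube.internal
       fallthrough
    }

and it could be confirmed after the fact with something like:

    sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o yaml | grep -A 3 'hosts {'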
	W1122 00:52:50.800770  692782 node_ready.go:57] node "old-k8s-version-625837" has "Ready":"False" status (will retry)
	W1122 00:52:53.301627  692782 node_ready.go:57] node "old-k8s-version-625837" has "Ready":"False" status (will retry)
	W1122 00:52:55.800615  692782 node_ready.go:57] node "old-k8s-version-625837" has "Ready":"False" status (will retry)
	W1122 00:52:57.800792  692782 node_ready.go:57] node "old-k8s-version-625837" has "Ready":"False" status (will retry)
	W1122 00:52:59.800926  692782 node_ready.go:57] node "old-k8s-version-625837" has "Ready":"False" status (will retry)
	W1122 00:53:01.801918  692782 node_ready.go:57] node "old-k8s-version-625837" has "Ready":"False" status (will retry)
	I1122 00:53:02.300984  692782 node_ready.go:49] node "old-k8s-version-625837" is "Ready"
	I1122 00:53:02.301019  692782 node_ready.go:38] duration metric: took 13.503467568s for node "old-k8s-version-625837" to be "Ready" ...
	I1122 00:53:02.301045  692782 api_server.go:52] waiting for apiserver process to appear ...
	I1122 00:53:02.301104  692782 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:53:02.313729  692782 api_server.go:72] duration metric: took 14.68774849s to wait for apiserver process to appear ...
	I1122 00:53:02.313762  692782 api_server.go:88] waiting for apiserver healthz status ...
	I1122 00:53:02.313782  692782 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1122 00:53:02.322277  692782 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1122 00:53:02.323787  692782 api_server.go:141] control plane version: v1.28.0
	I1122 00:53:02.323812  692782 api_server.go:131] duration metric: took 10.042733ms to wait for apiserver health ...
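The readiness check above is a plain HTTPS GET against the apiserver's /healthz endpoint, which returned 200 with body "ok" in about 10ms. It can be reproduced by hand against the endpoint from this log (-k because the cluster CA is not in the host trust store; illustrative only):

    curl -sk https://192.168.85.2:8443/healthz
    # expected body: ok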
	I1122 00:53:02.323822  692782 system_pods.go:43] waiting for kube-system pods to appear ...
	I1122 00:53:02.327502  692782 system_pods.go:59] 8 kube-system pods found
	I1122 00:53:02.327540  692782 system_pods.go:61] "coredns-5dd5756b68-6m4nr" [21a6b372-6765-44be-afb9-9dcaf8246818] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:53:02.327547  692782 system_pods.go:61] "etcd-old-k8s-version-625837" [ad180e51-5a9b-40c7-a5df-79ae88a52cdd] Running
	I1122 00:53:02.327553  692782 system_pods.go:61] "kindnet-h6vbs" [136803c6-4591-42d7-b387-7aa8c6c6b628] Running
	I1122 00:53:02.327558  692782 system_pods.go:61] "kube-apiserver-old-k8s-version-625837" [51c5da43-2565-4b7d-92bb-6cebb5a661f6] Running
	I1122 00:53:02.327563  692782 system_pods.go:61] "kube-controller-manager-old-k8s-version-625837" [14cf17d6-f0ee-4265-a046-51b9c2134e13] Running
	I1122 00:53:02.327566  692782 system_pods.go:61] "kube-proxy-zdmf6" [7b5dde4d-792c-4340-9621-ccd57f294d20] Running
	I1122 00:53:02.327570  692782 system_pods.go:61] "kube-scheduler-old-k8s-version-625837" [6afcc4bd-f6b4-462e-bc0a-9122c24132b8] Running
	I1122 00:53:02.327580  692782 system_pods.go:61] "storage-provisioner" [e45decb9-b863-4eeb-8363-e8134ea94857] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:53:02.327597  692782 system_pods.go:74] duration metric: took 3.764189ms to wait for pod list to return data ...
	I1122 00:53:02.327609  692782 default_sa.go:34] waiting for default service account to be created ...
	I1122 00:53:02.330357  692782 default_sa.go:45] found service account: "default"
	I1122 00:53:02.330384  692782 default_sa.go:55] duration metric: took 2.768452ms for default service account to be created ...
	I1122 00:53:02.330394  692782 system_pods.go:116] waiting for k8s-apps to be running ...
	I1122 00:53:02.335537  692782 system_pods.go:86] 8 kube-system pods found
	I1122 00:53:02.335571  692782 system_pods.go:89] "coredns-5dd5756b68-6m4nr" [21a6b372-6765-44be-afb9-9dcaf8246818] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:53:02.335578  692782 system_pods.go:89] "etcd-old-k8s-version-625837" [ad180e51-5a9b-40c7-a5df-79ae88a52cdd] Running
	I1122 00:53:02.335584  692782 system_pods.go:89] "kindnet-h6vbs" [136803c6-4591-42d7-b387-7aa8c6c6b628] Running
	I1122 00:53:02.335615  692782 system_pods.go:89] "kube-apiserver-old-k8s-version-625837" [51c5da43-2565-4b7d-92bb-6cebb5a661f6] Running
	I1122 00:53:02.335628  692782 system_pods.go:89] "kube-controller-manager-old-k8s-version-625837" [14cf17d6-f0ee-4265-a046-51b9c2134e13] Running
	I1122 00:53:02.335632  692782 system_pods.go:89] "kube-proxy-zdmf6" [7b5dde4d-792c-4340-9621-ccd57f294d20] Running
	I1122 00:53:02.335636  692782 system_pods.go:89] "kube-scheduler-old-k8s-version-625837" [6afcc4bd-f6b4-462e-bc0a-9122c24132b8] Running
	I1122 00:53:02.335643  692782 system_pods.go:89] "storage-provisioner" [e45decb9-b863-4eeb-8363-e8134ea94857] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:53:02.335669  692782 retry.go:31] will retry after 191.574689ms: missing components: kube-dns
	I1122 00:53:02.546800  692782 system_pods.go:86] 8 kube-system pods found
	I1122 00:53:02.546839  692782 system_pods.go:89] "coredns-5dd5756b68-6m4nr" [21a6b372-6765-44be-afb9-9dcaf8246818] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:53:02.546847  692782 system_pods.go:89] "etcd-old-k8s-version-625837" [ad180e51-5a9b-40c7-a5df-79ae88a52cdd] Running
	I1122 00:53:02.546854  692782 system_pods.go:89] "kindnet-h6vbs" [136803c6-4591-42d7-b387-7aa8c6c6b628] Running
	I1122 00:53:02.546869  692782 system_pods.go:89] "kube-apiserver-old-k8s-version-625837" [51c5da43-2565-4b7d-92bb-6cebb5a661f6] Running
	I1122 00:53:02.546877  692782 system_pods.go:89] "kube-controller-manager-old-k8s-version-625837" [14cf17d6-f0ee-4265-a046-51b9c2134e13] Running
	I1122 00:53:02.546882  692782 system_pods.go:89] "kube-proxy-zdmf6" [7b5dde4d-792c-4340-9621-ccd57f294d20] Running
	I1122 00:53:02.546889  692782 system_pods.go:89] "kube-scheduler-old-k8s-version-625837" [6afcc4bd-f6b4-462e-bc0a-9122c24132b8] Running
	I1122 00:53:02.546894  692782 system_pods.go:89] "storage-provisioner" [e45decb9-b863-4eeb-8363-e8134ea94857] Running
	I1122 00:53:02.546915  692782 retry.go:31] will retry after 365.035546ms: missing components: kube-dns
	I1122 00:53:02.916266  692782 system_pods.go:86] 8 kube-system pods found
	I1122 00:53:02.916304  692782 system_pods.go:89] "coredns-5dd5756b68-6m4nr" [21a6b372-6765-44be-afb9-9dcaf8246818] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:53:02.916312  692782 system_pods.go:89] "etcd-old-k8s-version-625837" [ad180e51-5a9b-40c7-a5df-79ae88a52cdd] Running
	I1122 00:53:02.916319  692782 system_pods.go:89] "kindnet-h6vbs" [136803c6-4591-42d7-b387-7aa8c6c6b628] Running
	I1122 00:53:02.916323  692782 system_pods.go:89] "kube-apiserver-old-k8s-version-625837" [51c5da43-2565-4b7d-92bb-6cebb5a661f6] Running
	I1122 00:53:02.916328  692782 system_pods.go:89] "kube-controller-manager-old-k8s-version-625837" [14cf17d6-f0ee-4265-a046-51b9c2134e13] Running
	I1122 00:53:02.916332  692782 system_pods.go:89] "kube-proxy-zdmf6" [7b5dde4d-792c-4340-9621-ccd57f294d20] Running
	I1122 00:53:02.916336  692782 system_pods.go:89] "kube-scheduler-old-k8s-version-625837" [6afcc4bd-f6b4-462e-bc0a-9122c24132b8] Running
	I1122 00:53:02.916340  692782 system_pods.go:89] "storage-provisioner" [e45decb9-b863-4eeb-8363-e8134ea94857] Running
	I1122 00:53:02.916354  692782 retry.go:31] will retry after 315.096719ms: missing components: kube-dns
	I1122 00:53:03.236190  692782 system_pods.go:86] 8 kube-system pods found
	I1122 00:53:03.236226  692782 system_pods.go:89] "coredns-5dd5756b68-6m4nr" [21a6b372-6765-44be-afb9-9dcaf8246818] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:53:03.236233  692782 system_pods.go:89] "etcd-old-k8s-version-625837" [ad180e51-5a9b-40c7-a5df-79ae88a52cdd] Running
	I1122 00:53:03.236239  692782 system_pods.go:89] "kindnet-h6vbs" [136803c6-4591-42d7-b387-7aa8c6c6b628] Running
	I1122 00:53:03.236244  692782 system_pods.go:89] "kube-apiserver-old-k8s-version-625837" [51c5da43-2565-4b7d-92bb-6cebb5a661f6] Running
	I1122 00:53:03.236260  692782 system_pods.go:89] "kube-controller-manager-old-k8s-version-625837" [14cf17d6-f0ee-4265-a046-51b9c2134e13] Running
	I1122 00:53:03.236268  692782 system_pods.go:89] "kube-proxy-zdmf6" [7b5dde4d-792c-4340-9621-ccd57f294d20] Running
	I1122 00:53:03.236272  692782 system_pods.go:89] "kube-scheduler-old-k8s-version-625837" [6afcc4bd-f6b4-462e-bc0a-9122c24132b8] Running
	I1122 00:53:03.236276  692782 system_pods.go:89] "storage-provisioner" [e45decb9-b863-4eeb-8363-e8134ea94857] Running
	I1122 00:53:03.236291  692782 retry.go:31] will retry after 570.239574ms: missing components: kube-dns
	I1122 00:53:03.810706  692782 system_pods.go:86] 8 kube-system pods found
	I1122 00:53:03.810737  692782 system_pods.go:89] "coredns-5dd5756b68-6m4nr" [21a6b372-6765-44be-afb9-9dcaf8246818] Running
	I1122 00:53:03.810745  692782 system_pods.go:89] "etcd-old-k8s-version-625837" [ad180e51-5a9b-40c7-a5df-79ae88a52cdd] Running
	I1122 00:53:03.810749  692782 system_pods.go:89] "kindnet-h6vbs" [136803c6-4591-42d7-b387-7aa8c6c6b628] Running
	I1122 00:53:03.810753  692782 system_pods.go:89] "kube-apiserver-old-k8s-version-625837" [51c5da43-2565-4b7d-92bb-6cebb5a661f6] Running
	I1122 00:53:03.810758  692782 system_pods.go:89] "kube-controller-manager-old-k8s-version-625837" [14cf17d6-f0ee-4265-a046-51b9c2134e13] Running
	I1122 00:53:03.810785  692782 system_pods.go:89] "kube-proxy-zdmf6" [7b5dde4d-792c-4340-9621-ccd57f294d20] Running
	I1122 00:53:03.810798  692782 system_pods.go:89] "kube-scheduler-old-k8s-version-625837" [6afcc4bd-f6b4-462e-bc0a-9122c24132b8] Running
	I1122 00:53:03.810803  692782 system_pods.go:89] "storage-provisioner" [e45decb9-b863-4eeb-8363-e8134ea94857] Running
	I1122 00:53:03.810811  692782 system_pods.go:126] duration metric: took 1.480411103s to wait for k8s-apps to be running ...
	I1122 00:53:03.810822  692782 system_svc.go:44] waiting for kubelet service to be running ....
	I1122 00:53:03.810898  692782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:53:03.824396  692782 system_svc.go:56] duration metric: took 13.565797ms WaitForService to wait for kubelet
	I1122 00:53:03.824425  692782 kubeadm.go:587] duration metric: took 16.198449142s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:53:03.824445  692782 node_conditions.go:102] verifying NodePressure condition ...
	I1122 00:53:03.827293  692782 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1122 00:53:03.827329  692782 node_conditions.go:123] node cpu capacity is 2
	I1122 00:53:03.827343  692782 node_conditions.go:105] duration metric: took 2.892461ms to run NodePressure ...
	I1122 00:53:03.827371  692782 start.go:242] waiting for startup goroutines ...
	I1122 00:53:03.827384  692782 start.go:247] waiting for cluster config update ...
	I1122 00:53:03.827420  692782 start.go:256] writing updated cluster config ...
	I1122 00:53:03.827717  692782 ssh_runner.go:195] Run: rm -f paused
	I1122 00:53:03.831259  692782 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:53:03.835713  692782 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-6m4nr" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:53:03.841161  692782 pod_ready.go:94] pod "coredns-5dd5756b68-6m4nr" is "Ready"
	I1122 00:53:03.841188  692782 pod_ready.go:86] duration metric: took 5.446667ms for pod "coredns-5dd5756b68-6m4nr" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:53:03.844256  692782 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-625837" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:53:03.849377  692782 pod_ready.go:94] pod "etcd-old-k8s-version-625837" is "Ready"
	I1122 00:53:03.849408  692782 pod_ready.go:86] duration metric: took 5.129614ms for pod "etcd-old-k8s-version-625837" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:53:03.852723  692782 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-625837" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:53:03.857506  692782 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-625837" is "Ready"
	I1122 00:53:03.857532  692782 pod_ready.go:86] duration metric: took 4.782539ms for pod "kube-apiserver-old-k8s-version-625837" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:53:03.861092  692782 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-625837" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:53:04.235468  692782 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-625837" is "Ready"
	I1122 00:53:04.235497  692782 pod_ready.go:86] duration metric: took 374.383393ms for pod "kube-controller-manager-old-k8s-version-625837" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:53:04.436249  692782 pod_ready.go:83] waiting for pod "kube-proxy-zdmf6" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:53:04.835442  692782 pod_ready.go:94] pod "kube-proxy-zdmf6" is "Ready"
	I1122 00:53:04.835469  692782 pod_ready.go:86] duration metric: took 399.193759ms for pod "kube-proxy-zdmf6" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:53:05.036507  692782 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-625837" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:53:05.435095  692782 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-625837" is "Ready"
	I1122 00:53:05.435123  692782 pod_ready.go:86] duration metric: took 398.588222ms for pod "kube-scheduler-old-k8s-version-625837" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:53:05.435136  692782 pod_ready.go:40] duration metric: took 1.603836287s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:53:05.498221  692782 start.go:628] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1122 00:53:05.501480  692782 out.go:203] 
	W1122 00:53:05.504585  692782 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1122 00:53:05.508427  692782 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1122 00:53:05.511325  692782 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-625837" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 22 00:53:02 old-k8s-version-625837 crio[836]: time="2025-11-22T00:53:02.523709672Z" level=info msg="Created container 6a36c59fad964c5b7ee733b78d9d03f2cb0a0068a356f4a7ea2206e541a00681: kube-system/coredns-5dd5756b68-6m4nr/coredns" id=e0ed21ad-57b7-44f0-933f-3fc68a3a06b1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:53:02 old-k8s-version-625837 crio[836]: time="2025-11-22T00:53:02.524712014Z" level=info msg="Starting container: 6a36c59fad964c5b7ee733b78d9d03f2cb0a0068a356f4a7ea2206e541a00681" id=3fb02863-f00c-4922-8e2e-1c2c73e1b0a1 name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:53:02 old-k8s-version-625837 crio[836]: time="2025-11-22T00:53:02.527776638Z" level=info msg="Started container" PID=1929 containerID=6a36c59fad964c5b7ee733b78d9d03f2cb0a0068a356f4a7ea2206e541a00681 description=kube-system/coredns-5dd5756b68-6m4nr/coredns id=3fb02863-f00c-4922-8e2e-1c2c73e1b0a1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=23fe387bca20db2ce5d279e003ffa41021522ab04597a476aa3fff31fdc8d447
	Nov 22 00:53:06 old-k8s-version-625837 crio[836]: time="2025-11-22T00:53:06.027285279Z" level=info msg="Running pod sandbox: default/busybox/POD" id=6a849c17-0fb9-4c93-9a36-a07438d43933 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 22 00:53:06 old-k8s-version-625837 crio[836]: time="2025-11-22T00:53:06.027382351Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:53:06 old-k8s-version-625837 crio[836]: time="2025-11-22T00:53:06.032964276Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:2f8abc46e864cd56c7e7d7b884afd9884bdf27ae1aee04be6904a8e00145e595 UID:770c3edc-3b43-4aa9-b57c-6884dc11b4dc NetNS:/var/run/netns/99926f30-e6f4-41de-969c-75b1e98bcc10 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40016aa8f0}] Aliases:map[]}"
	Nov 22 00:53:06 old-k8s-version-625837 crio[836]: time="2025-11-22T00:53:06.033141337Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 22 00:53:06 old-k8s-version-625837 crio[836]: time="2025-11-22T00:53:06.046191129Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:2f8abc46e864cd56c7e7d7b884afd9884bdf27ae1aee04be6904a8e00145e595 UID:770c3edc-3b43-4aa9-b57c-6884dc11b4dc NetNS:/var/run/netns/99926f30-e6f4-41de-969c-75b1e98bcc10 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40016aa8f0}] Aliases:map[]}"
	Nov 22 00:53:06 old-k8s-version-625837 crio[836]: time="2025-11-22T00:53:06.046529277Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 22 00:53:06 old-k8s-version-625837 crio[836]: time="2025-11-22T00:53:06.050523712Z" level=info msg="Ran pod sandbox 2f8abc46e864cd56c7e7d7b884afd9884bdf27ae1aee04be6904a8e00145e595 with infra container: default/busybox/POD" id=6a849c17-0fb9-4c93-9a36-a07438d43933 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 22 00:53:06 old-k8s-version-625837 crio[836]: time="2025-11-22T00:53:06.053442511Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a57b6ebc-10e5-486a-be46-8b199be5e87f name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:53:06 old-k8s-version-625837 crio[836]: time="2025-11-22T00:53:06.053572435Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=a57b6ebc-10e5-486a-be46-8b199be5e87f name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:53:06 old-k8s-version-625837 crio[836]: time="2025-11-22T00:53:06.053607453Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=a57b6ebc-10e5-486a-be46-8b199be5e87f name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:53:06 old-k8s-version-625837 crio[836]: time="2025-11-22T00:53:06.054374094Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=81283b2f-2ebf-4d01-96fb-34a69a29b34d name=/runtime.v1.ImageService/PullImage
	Nov 22 00:53:06 old-k8s-version-625837 crio[836]: time="2025-11-22T00:53:06.059714401Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 22 00:53:08 old-k8s-version-625837 crio[836]: time="2025-11-22T00:53:08.190022696Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=81283b2f-2ebf-4d01-96fb-34a69a29b34d name=/runtime.v1.ImageService/PullImage
	Nov 22 00:53:08 old-k8s-version-625837 crio[836]: time="2025-11-22T00:53:08.190984957Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7f73bfa5-b6cb-468e-9a37-2dd397541c4a name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:53:08 old-k8s-version-625837 crio[836]: time="2025-11-22T00:53:08.192470312Z" level=info msg="Creating container: default/busybox/busybox" id=a90c95a9-ffbf-46c1-8f73-bc921e121150 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:53:08 old-k8s-version-625837 crio[836]: time="2025-11-22T00:53:08.192606029Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:53:08 old-k8s-version-625837 crio[836]: time="2025-11-22T00:53:08.197590909Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:53:08 old-k8s-version-625837 crio[836]: time="2025-11-22T00:53:08.198107456Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:53:08 old-k8s-version-625837 crio[836]: time="2025-11-22T00:53:08.214873808Z" level=info msg="Created container 8f60712c3f9ab807d8e271e6732353697be5b0ca663c43900311be4bb23ad6b3: default/busybox/busybox" id=a90c95a9-ffbf-46c1-8f73-bc921e121150 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:53:08 old-k8s-version-625837 crio[836]: time="2025-11-22T00:53:08.216030139Z" level=info msg="Starting container: 8f60712c3f9ab807d8e271e6732353697be5b0ca663c43900311be4bb23ad6b3" id=5d568bf6-e67d-4317-84a7-83188f9fecab name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:53:08 old-k8s-version-625837 crio[836]: time="2025-11-22T00:53:08.218411501Z" level=info msg="Started container" PID=1981 containerID=8f60712c3f9ab807d8e271e6732353697be5b0ca663c43900311be4bb23ad6b3 description=default/busybox/busybox id=5d568bf6-e67d-4317-84a7-83188f9fecab name=/runtime.v1.RuntimeService/StartContainer sandboxID=2f8abc46e864cd56c7e7d7b884afd9884bdf27ae1aee04be6904a8e00145e595
	Nov 22 00:53:14 old-k8s-version-625837 crio[836]: time="2025-11-22T00:53:14.919151649Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	8f60712c3f9ab       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   2f8abc46e864c       busybox                                          default
	6a36c59fad964       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      13 seconds ago      Running             coredns                   0                   23fe387bca20d       coredns-5dd5756b68-6m4nr                         kube-system
	62df0c51d0727       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago      Running             storage-provisioner       0                   07d5d2cb57e3b       storage-provisioner                              kube-system
	e2913badb9e78       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    25 seconds ago      Running             kindnet-cni               0                   2b0df3718e22f       kindnet-h6vbs                                    kube-system
	a341239db06d8       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      27 seconds ago      Running             kube-proxy                0                   34df36eac7adc       kube-proxy-zdmf6                                 kube-system
	a141301646d60       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      48 seconds ago      Running             etcd                      0                   8f7e0374d0338       etcd-old-k8s-version-625837                      kube-system
	336f30fdffab3       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      48 seconds ago      Running             kube-controller-manager   0                   61420d3962f81       kube-controller-manager-old-k8s-version-625837   kube-system
	3eadaf1862a0c       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      48 seconds ago      Running             kube-scheduler            0                   ba66dadf3d267       kube-scheduler-old-k8s-version-625837            kube-system
	3cb4133290da7       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      48 seconds ago      Running             kube-apiserver            0                   f4a8346e39a8a       kube-apiserver-old-k8s-version-625837            kube-system
	
	
	==> coredns [6a36c59fad964c5b7ee733b78d9d03f2cb0a0068a356f4a7ea2206e541a00681] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:39973 - 34763 "HINFO IN 264587621451967734.1490998280874455493. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.005356126s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-625837
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-625837
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=old-k8s-version-625837
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T00_52_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:52:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-625837
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:53:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:53:06 +0000   Sat, 22 Nov 2025 00:52:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:53:06 +0000   Sat, 22 Nov 2025 00:52:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:53:06 +0000   Sat, 22 Nov 2025 00:52:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 22 Nov 2025 00:53:06 +0000   Sat, 22 Nov 2025 00:53:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-625837
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c92eea4d3c03c0156e01fcf8691e3907
	  System UUID:                dcff3a74-c051-4bbe-bac8-1863a477231a
	  Boot ID:                    72ac7385-472f-47d1-a23e-bd80468d6e09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-6m4nr                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     29s
	  kube-system                 etcd-old-k8s-version-625837                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         41s
	  kube-system                 kindnet-h6vbs                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-old-k8s-version-625837             250m (12%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-controller-manager-old-k8s-version-625837    200m (10%)    0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 kube-proxy-zdmf6                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-old-k8s-version-625837             100m (5%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27s                kube-proxy       
	  Normal  NodeHasSufficientMemory  49s (x8 over 49s)  kubelet          Node old-k8s-version-625837 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    49s (x8 over 49s)  kubelet          Node old-k8s-version-625837 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     49s (x8 over 49s)  kubelet          Node old-k8s-version-625837 status is now: NodeHasSufficientPID
	  Normal  Starting                 41s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  41s                kubelet          Node old-k8s-version-625837 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s                kubelet          Node old-k8s-version-625837 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s                kubelet          Node old-k8s-version-625837 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s                node-controller  Node old-k8s-version-625837 event: Registered Node old-k8s-version-625837 in Controller
	  Normal  NodeReady                14s                kubelet          Node old-k8s-version-625837 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov22 00:25] overlayfs: idmapped layers are currently not supported
	[Nov22 00:26] overlayfs: idmapped layers are currently not supported
	[Nov22 00:31] overlayfs: idmapped layers are currently not supported
	[ +30.712010] overlayfs: idmapped layers are currently not supported
	[Nov22 00:32] overlayfs: idmapped layers are currently not supported
	[Nov22 00:33] overlayfs: idmapped layers are currently not supported
	[Nov22 00:35] overlayfs: idmapped layers are currently not supported
	[Nov22 00:36] overlayfs: idmapped layers are currently not supported
	[ +18.168104] overlayfs: idmapped layers are currently not supported
	[Nov22 00:37] overlayfs: idmapped layers are currently not supported
	[ +56.322609] overlayfs: idmapped layers are currently not supported
	[Nov22 00:38] overlayfs: idmapped layers are currently not supported
	[Nov22 00:39] overlayfs: idmapped layers are currently not supported
	[ +23.174928] overlayfs: idmapped layers are currently not supported
	[Nov22 00:41] overlayfs: idmapped layers are currently not supported
	[Nov22 00:42] overlayfs: idmapped layers are currently not supported
	[Nov22 00:44] overlayfs: idmapped layers are currently not supported
	[Nov22 00:45] overlayfs: idmapped layers are currently not supported
	[Nov22 00:46] overlayfs: idmapped layers are currently not supported
	[Nov22 00:48] overlayfs: idmapped layers are currently not supported
	[Nov22 00:50] overlayfs: idmapped layers are currently not supported
	[Nov22 00:51] overlayfs: idmapped layers are currently not supported
	[ +11.900293] overlayfs: idmapped layers are currently not supported
	[ +28.922055] overlayfs: idmapped layers are currently not supported
	[Nov22 00:52] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [a141301646d60bf8dccec9c48cf536a9660c04ff39e23a17f2e356a5632d1bfb] <==
	{"level":"info","ts":"2025-11-22T00:52:28.062958Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-11-22T00:52:28.066571Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-11-22T00:52:28.06781Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-22T00:52:28.067952Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-22T00:52:28.070747Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-22T00:52:28.074673Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-22T00:52:28.074845Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-22T00:52:28.325845Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-22T00:52:28.325953Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-22T00:52:28.326001Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-11-22T00:52:28.326048Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-11-22T00:52:28.326078Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-22T00:52:28.326113Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-11-22T00:52:28.326144Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-22T00:52:28.327475Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-22T00:52:28.328518Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-625837 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-22T00:52:28.328597Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-22T00:52:28.329299Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-22T00:52:28.332153Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-22T00:52:28.332218Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-22T00:52:28.332443Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-22T00:52:28.332808Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-22T00:52:28.333556Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-22T00:52:28.334446Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-22T00:52:28.334493Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 00:53:16 up  5:35,  0 user,  load average: 2.45, 3.22, 2.44
	Linux old-k8s-version-625837 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e2913badb9e78c1dc96a617a4342a3017554ba1c96ed273a9662cada5649734d] <==
	I1122 00:52:51.317047       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1122 00:52:51.317255       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1122 00:52:51.317371       1 main.go:148] setting mtu 1500 for CNI 
	I1122 00:52:51.317389       1 main.go:178] kindnetd IP family: "ipv4"
	I1122 00:52:51.317403       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-22T00:52:51Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1122 00:52:51.520711       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1122 00:52:51.520883       1 controller.go:381] "Waiting for informer caches to sync"
	I1122 00:52:51.520970       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1122 00:52:51.521346       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1122 00:52:51.809875       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1122 00:52:51.809965       1 metrics.go:72] Registering metrics
	I1122 00:52:51.810055       1 controller.go:711] "Syncing nftables rules"
	I1122 00:53:01.520001       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:53:01.520140       1 main.go:301] handling current node
	I1122 00:53:11.520878       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:53:11.521035       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3cb4133290da7f7a630418ed34e29800dc77436fd23fd604222619580d35a16d] <==
	I1122 00:52:31.827999       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1122 00:52:31.835445       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1122 00:52:31.835917       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1122 00:52:31.835937       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1122 00:52:31.840851       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1122 00:52:31.841273       1 aggregator.go:166] initial CRD sync complete...
	I1122 00:52:31.841297       1 autoregister_controller.go:141] Starting autoregister controller
	I1122 00:52:31.841303       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1122 00:52:31.841310       1 cache.go:39] Caches are synced for autoregister controller
	I1122 00:52:31.843901       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1122 00:52:32.448218       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1122 00:52:32.452791       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1122 00:52:32.452813       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1122 00:52:33.089776       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1122 00:52:33.134297       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1122 00:52:33.288078       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1122 00:52:33.294902       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1122 00:52:33.296026       1 controller.go:624] quota admission added evaluator for: endpoints
	I1122 00:52:33.300719       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1122 00:52:33.611509       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1122 00:52:35.136521       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1122 00:52:35.151419       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1122 00:52:35.173049       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1122 00:52:47.561501       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1122 00:52:47.834107       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [336f30fdffab34b3e6bfc023577e980912462a768349fc9b6b17eff30f396dd6] <==
	I1122 00:52:47.622481       1 event.go:307] "Event occurred" object="kube-system/kube-controller-manager-old-k8s-version-625837" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1122 00:52:47.624493       1 shared_informer.go:318] Caches are synced for resource quota
	I1122 00:52:47.660617       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1122 00:52:47.660667       1 shared_informer.go:318] Caches are synced for resource quota
	I1122 00:52:47.955772       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-6m4nr"
	I1122 00:52:47.992732       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-zdmf6"
	I1122 00:52:47.992756       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-h6vbs"
	I1122 00:52:48.002765       1 shared_informer.go:318] Caches are synced for garbage collector
	I1122 00:52:48.010736       1 shared_informer.go:318] Caches are synced for garbage collector
	I1122 00:52:48.010775       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1122 00:52:48.060673       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-jpg5h"
	I1122 00:52:48.110371       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="510.04692ms"
	I1122 00:52:48.138936       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="28.510491ms"
	I1122 00:52:48.140015       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="79.998µs"
	I1122 00:52:48.827510       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1122 00:52:48.870566       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-jpg5h"
	I1122 00:52:48.899795       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="73.05084ms"
	I1122 00:52:48.927659       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="27.81123ms"
	I1122 00:52:48.927765       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="54.595µs"
	I1122 00:53:02.102310       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="112.99µs"
	I1122 00:53:02.130860       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="116.435µs"
	I1122 00:53:02.550691       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1122 00:53:03.519475       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="83.583µs"
	I1122 00:53:03.560634       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="15.963414ms"
	I1122 00:53:03.560724       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="51.945µs"
	
	
	==> kube-proxy [a341239db06d8c871ad56cf50ff64d48114fc24fdcb86b4b0c032d2046ab6661] <==
	I1122 00:52:48.607768       1 server_others.go:69] "Using iptables proxy"
	I1122 00:52:48.647265       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1122 00:52:48.726297       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1122 00:52:48.734573       1 server_others.go:152] "Using iptables Proxier"
	I1122 00:52:48.734613       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1122 00:52:48.734620       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1122 00:52:48.734650       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1122 00:52:48.734839       1 server.go:846] "Version info" version="v1.28.0"
	I1122 00:52:48.734849       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:52:48.735764       1 config.go:188] "Starting service config controller"
	I1122 00:52:48.735789       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1122 00:52:48.735815       1 config.go:97] "Starting endpoint slice config controller"
	I1122 00:52:48.735819       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1122 00:52:48.742514       1 config.go:315] "Starting node config controller"
	I1122 00:52:48.742538       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1122 00:52:48.837582       1 shared_informer.go:318] Caches are synced for service config
	I1122 00:52:48.837635       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1122 00:52:48.844851       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [3eadaf1862a0cb27b21e1574e19f639ac166fc733eb156b56a5291bfc07eba94] <==
	W1122 00:52:32.299536       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1122 00:52:32.301219       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1122 00:52:32.299597       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1122 00:52:32.301455       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1122 00:52:32.299648       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1122 00:52:32.301600       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1122 00:52:32.299694       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1122 00:52:32.301692       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1122 00:52:32.299731       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1122 00:52:32.301939       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1122 00:52:32.299806       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1122 00:52:32.302069       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1122 00:52:32.302470       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1122 00:52:32.302535       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1122 00:52:32.299876       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1122 00:52:32.302908       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1122 00:52:32.299924       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1122 00:52:32.304632       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1122 00:52:32.299971       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1122 00:52:32.305857       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1122 00:52:32.300941       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1122 00:52:32.306104       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1122 00:52:32.307560       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1122 00:52:32.307705       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I1122 00:52:33.383732       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 22 00:52:47 old-k8s-version-625837 kubelet[1356]: I1122 00:52:47.493074    1356 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 22 00:52:48 old-k8s-version-625837 kubelet[1356]: I1122 00:52:48.067722    1356 topology_manager.go:215] "Topology Admit Handler" podUID="7b5dde4d-792c-4340-9621-ccd57f294d20" podNamespace="kube-system" podName="kube-proxy-zdmf6"
	Nov 22 00:52:48 old-k8s-version-625837 kubelet[1356]: I1122 00:52:48.086090    1356 topology_manager.go:215] "Topology Admit Handler" podUID="136803c6-4591-42d7-b387-7aa8c6c6b628" podNamespace="kube-system" podName="kindnet-h6vbs"
	Nov 22 00:52:48 old-k8s-version-625837 kubelet[1356]: I1122 00:52:48.224358    1356 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj5hj\" (UniqueName: \"kubernetes.io/projected/136803c6-4591-42d7-b387-7aa8c6c6b628-kube-api-access-rj5hj\") pod \"kindnet-h6vbs\" (UID: \"136803c6-4591-42d7-b387-7aa8c6c6b628\") " pod="kube-system/kindnet-h6vbs"
	Nov 22 00:52:48 old-k8s-version-625837 kubelet[1356]: I1122 00:52:48.224424    1356 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7b5dde4d-792c-4340-9621-ccd57f294d20-xtables-lock\") pod \"kube-proxy-zdmf6\" (UID: \"7b5dde4d-792c-4340-9621-ccd57f294d20\") " pod="kube-system/kube-proxy-zdmf6"
	Nov 22 00:52:48 old-k8s-version-625837 kubelet[1356]: I1122 00:52:48.224452    1356 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/136803c6-4591-42d7-b387-7aa8c6c6b628-lib-modules\") pod \"kindnet-h6vbs\" (UID: \"136803c6-4591-42d7-b387-7aa8c6c6b628\") " pod="kube-system/kindnet-h6vbs"
	Nov 22 00:52:48 old-k8s-version-625837 kubelet[1356]: I1122 00:52:48.224508    1356 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7b5dde4d-792c-4340-9621-ccd57f294d20-lib-modules\") pod \"kube-proxy-zdmf6\" (UID: \"7b5dde4d-792c-4340-9621-ccd57f294d20\") " pod="kube-system/kube-proxy-zdmf6"
	Nov 22 00:52:48 old-k8s-version-625837 kubelet[1356]: I1122 00:52:48.224537    1356 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/136803c6-4591-42d7-b387-7aa8c6c6b628-cni-cfg\") pod \"kindnet-h6vbs\" (UID: \"136803c6-4591-42d7-b387-7aa8c6c6b628\") " pod="kube-system/kindnet-h6vbs"
	Nov 22 00:52:48 old-k8s-version-625837 kubelet[1356]: I1122 00:52:48.224667    1356 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7b5dde4d-792c-4340-9621-ccd57f294d20-kube-proxy\") pod \"kube-proxy-zdmf6\" (UID: \"7b5dde4d-792c-4340-9621-ccd57f294d20\") " pod="kube-system/kube-proxy-zdmf6"
	Nov 22 00:52:48 old-k8s-version-625837 kubelet[1356]: I1122 00:52:48.224707    1356 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/136803c6-4591-42d7-b387-7aa8c6c6b628-xtables-lock\") pod \"kindnet-h6vbs\" (UID: \"136803c6-4591-42d7-b387-7aa8c6c6b628\") " pod="kube-system/kindnet-h6vbs"
	Nov 22 00:52:48 old-k8s-version-625837 kubelet[1356]: I1122 00:52:48.224828    1356 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65979\" (UniqueName: \"kubernetes.io/projected/7b5dde4d-792c-4340-9621-ccd57f294d20-kube-api-access-65979\") pod \"kube-proxy-zdmf6\" (UID: \"7b5dde4d-792c-4340-9621-ccd57f294d20\") " pod="kube-system/kube-proxy-zdmf6"
	Nov 22 00:52:49 old-k8s-version-625837 kubelet[1356]: I1122 00:52:49.490370    1356 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-zdmf6" podStartSLOduration=2.4903294799999998 podCreationTimestamp="2025-11-22 00:52:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:52:49.490284919 +0000 UTC m=+14.385619052" watchObservedRunningTime="2025-11-22 00:52:49.49032948 +0000 UTC m=+14.385663613"
	Nov 22 00:52:55 old-k8s-version-625837 kubelet[1356]: I1122 00:52:55.338885    1356 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-h6vbs" podStartSLOduration=5.548140494 podCreationTimestamp="2025-11-22 00:52:47 +0000 UTC" firstStartedPulling="2025-11-22 00:52:48.433505238 +0000 UTC m=+13.328839363" lastFinishedPulling="2025-11-22 00:52:51.224189201 +0000 UTC m=+16.119523326" observedRunningTime="2025-11-22 00:52:51.486710304 +0000 UTC m=+16.382044437" watchObservedRunningTime="2025-11-22 00:52:55.338824457 +0000 UTC m=+20.234158581"
	Nov 22 00:53:02 old-k8s-version-625837 kubelet[1356]: I1122 00:53:02.062777    1356 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 22 00:53:02 old-k8s-version-625837 kubelet[1356]: I1122 00:53:02.096565    1356 topology_manager.go:215] "Topology Admit Handler" podUID="e45decb9-b863-4eeb-8363-e8134ea94857" podNamespace="kube-system" podName="storage-provisioner"
	Nov 22 00:53:02 old-k8s-version-625837 kubelet[1356]: I1122 00:53:02.099710    1356 topology_manager.go:215] "Topology Admit Handler" podUID="21a6b372-6765-44be-afb9-9dcaf8246818" podNamespace="kube-system" podName="coredns-5dd5756b68-6m4nr"
	Nov 22 00:53:02 old-k8s-version-625837 kubelet[1356]: I1122 00:53:02.229432    1356 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l42t\" (UniqueName: \"kubernetes.io/projected/e45decb9-b863-4eeb-8363-e8134ea94857-kube-api-access-6l42t\") pod \"storage-provisioner\" (UID: \"e45decb9-b863-4eeb-8363-e8134ea94857\") " pod="kube-system/storage-provisioner"
	Nov 22 00:53:02 old-k8s-version-625837 kubelet[1356]: I1122 00:53:02.229649    1356 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdblb\" (UniqueName: \"kubernetes.io/projected/21a6b372-6765-44be-afb9-9dcaf8246818-kube-api-access-qdblb\") pod \"coredns-5dd5756b68-6m4nr\" (UID: \"21a6b372-6765-44be-afb9-9dcaf8246818\") " pod="kube-system/coredns-5dd5756b68-6m4nr"
	Nov 22 00:53:02 old-k8s-version-625837 kubelet[1356]: I1122 00:53:02.229690    1356 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e45decb9-b863-4eeb-8363-e8134ea94857-tmp\") pod \"storage-provisioner\" (UID: \"e45decb9-b863-4eeb-8363-e8134ea94857\") " pod="kube-system/storage-provisioner"
	Nov 22 00:53:02 old-k8s-version-625837 kubelet[1356]: I1122 00:53:02.229723    1356 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/21a6b372-6765-44be-afb9-9dcaf8246818-config-volume\") pod \"coredns-5dd5756b68-6m4nr\" (UID: \"21a6b372-6765-44be-afb9-9dcaf8246818\") " pod="kube-system/coredns-5dd5756b68-6m4nr"
	Nov 22 00:53:02 old-k8s-version-625837 kubelet[1356]: W1122 00:53:02.460627    1356 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/c1b8e95ff95e3c8d00f04e99efeff1b9aae77957e1730bef176997569aa985cb/crio-23fe387bca20db2ce5d279e003ffa41021522ab04597a476aa3fff31fdc8d447 WatchSource:0}: Error finding container 23fe387bca20db2ce5d279e003ffa41021522ab04597a476aa3fff31fdc8d447: Status 404 returned error can't find the container with id 23fe387bca20db2ce5d279e003ffa41021522ab04597a476aa3fff31fdc8d447
	Nov 22 00:53:02 old-k8s-version-625837 kubelet[1356]: I1122 00:53:02.513313    1356 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.5132627 podCreationTimestamp="2025-11-22 00:52:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:53:02.512817692 +0000 UTC m=+27.408151816" watchObservedRunningTime="2025-11-22 00:53:02.5132627 +0000 UTC m=+27.408596833"
	Nov 22 00:53:03 old-k8s-version-625837 kubelet[1356]: I1122 00:53:03.518172    1356 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-6m4nr" podStartSLOduration=16.518119368 podCreationTimestamp="2025-11-22 00:52:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:53:03.517406421 +0000 UTC m=+28.412740562" watchObservedRunningTime="2025-11-22 00:53:03.518119368 +0000 UTC m=+28.413453493"
	Nov 22 00:53:05 old-k8s-version-625837 kubelet[1356]: I1122 00:53:05.724893    1356 topology_manager.go:215] "Topology Admit Handler" podUID="770c3edc-3b43-4aa9-b57c-6884dc11b4dc" podNamespace="default" podName="busybox"
	Nov 22 00:53:05 old-k8s-version-625837 kubelet[1356]: I1122 00:53:05.855643    1356 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbh75\" (UniqueName: \"kubernetes.io/projected/770c3edc-3b43-4aa9-b57c-6884dc11b4dc-kube-api-access-rbh75\") pod \"busybox\" (UID: \"770c3edc-3b43-4aa9-b57c-6884dc11b4dc\") " pod="default/busybox"
	
	
	==> storage-provisioner [62df0c51d0727a1e6a8ac277d8fc1b6bd2b37aacd3b2818516e3ff41ae3718c0] <==
	I1122 00:53:02.478041       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1122 00:53:02.495892       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1122 00:53:02.495941       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1122 00:53:02.509058       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1122 00:53:02.510313       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-625837_8a6e4610-a33f-47a4-a9c4-2913324e2cf9!
	I1122 00:53:02.516275       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"829114ab-116e-46a2-b9b8-eeaca50c29a6", APIVersion:"v1", ResourceVersion:"445", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-625837_8a6e4610-a33f-47a4-a9c4-2913324e2cf9 became leader
	I1122 00:53:02.711479       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-625837_8a6e4610-a33f-47a4-a9c4-2913324e2cf9!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-625837 -n old-k8s-version-625837
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-625837 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.47s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (6.58s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-625837 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-625837 --alsologtostderr -v=1: exit status 80 (1.779166135s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-625837 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1122 00:54:32.989134  698582 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:54:32.989251  698582 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:54:32.989262  698582 out.go:374] Setting ErrFile to fd 2...
	I1122 00:54:32.989266  698582 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:54:32.989537  698582 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1122 00:54:32.989768  698582 out.go:368] Setting JSON to false
	I1122 00:54:32.989795  698582 mustload.go:66] Loading cluster: old-k8s-version-625837
	I1122 00:54:32.990242  698582 config.go:182] Loaded profile config "old-k8s-version-625837": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1122 00:54:32.990707  698582 cli_runner.go:164] Run: docker container inspect old-k8s-version-625837 --format={{.State.Status}}
	I1122 00:54:33.010187  698582 host.go:66] Checking if "old-k8s-version-625837" exists ...
	I1122 00:54:33.010514  698582 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:54:33.070998  698582 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-22 00:54:33.061014842 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:54:33.071705  698582 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-625837 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=
true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1122 00:54:33.076338  698582 out.go:179] * Pausing node old-k8s-version-625837 ... 
	I1122 00:54:33.078720  698582 host.go:66] Checking if "old-k8s-version-625837" exists ...
	I1122 00:54:33.079082  698582 ssh_runner.go:195] Run: systemctl --version
	I1122 00:54:33.079130  698582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-625837
	I1122 00:54:33.099508  698582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33775 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/old-k8s-version-625837/id_rsa Username:docker}
	I1122 00:54:33.201473  698582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:54:33.223405  698582 pause.go:52] kubelet running: true
	I1122 00:54:33.223524  698582 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1122 00:54:33.473332  698582 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1122 00:54:33.473448  698582 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1122 00:54:33.557992  698582 cri.go:89] found id: "ba7fcf01e100d9075f965f94ff668899f01eaaea6cf6c057437439123135dbae"
	I1122 00:54:33.558015  698582 cri.go:89] found id: "862c22e09e90a6b8d8c4549584f6e46b25d2206c9d9578169b2675c47e337141"
	I1122 00:54:33.558021  698582 cri.go:89] found id: "a05ae1ca90b871431d6a63387000b3a0fc2d30bdc217ce9cd70319e940e72234"
	I1122 00:54:33.558026  698582 cri.go:89] found id: "5a851503d8f86845f2821dfa2135db11f6512a59f26a300ecd84a35339db4496"
	I1122 00:54:33.558030  698582 cri.go:89] found id: "b36da003b1235361d4b8a4e7e49cab04a763af242e9b15f7fb361a03edb9e4c8"
	I1122 00:54:33.558034  698582 cri.go:89] found id: "9deafbe8687dcd224ab5e480cfefa5cc596bb04d62aab6f5da3083aca07488e8"
	I1122 00:54:33.558037  698582 cri.go:89] found id: "a1d1d67ba75cb36995d73540a0a298366b4b32ccbfda1a424c21b0b86506d11d"
	I1122 00:54:33.558040  698582 cri.go:89] found id: "4c192c23a5c2cb8d4827103c705875b67426f13e7541c3c230c0bacb6b6f0ca9"
	I1122 00:54:33.558043  698582 cri.go:89] found id: "fee91ee3c441411dbcd777ca8d5095cfd10e8a67a33ad4caf348ae63ec865a72"
	I1122 00:54:33.558074  698582 cri.go:89] found id: "d5df7182fcdae9d1252372617e3f730dce9069a240030f68c42a96f8a784beb0"
	I1122 00:54:33.558089  698582 cri.go:89] found id: "c68e57a00d3a76b7ae45ba0f8dd0b5bd690d691009ae2395dd8a7a8d4b3955db"
	I1122 00:54:33.558093  698582 cri.go:89] found id: ""
	I1122 00:54:33.558159  698582 ssh_runner.go:195] Run: sudo runc list -f json
	I1122 00:54:33.581639  698582 retry.go:31] will retry after 218.297743ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:54:33Z" level=error msg="open /run/runc: no such file or directory"
	I1122 00:54:33.801110  698582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:54:33.815771  698582 pause.go:52] kubelet running: false
	I1122 00:54:33.815849  698582 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1122 00:54:33.989712  698582 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1122 00:54:33.989830  698582 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1122 00:54:34.064002  698582 cri.go:89] found id: "ba7fcf01e100d9075f965f94ff668899f01eaaea6cf6c057437439123135dbae"
	I1122 00:54:34.064023  698582 cri.go:89] found id: "862c22e09e90a6b8d8c4549584f6e46b25d2206c9d9578169b2675c47e337141"
	I1122 00:54:34.064028  698582 cri.go:89] found id: "a05ae1ca90b871431d6a63387000b3a0fc2d30bdc217ce9cd70319e940e72234"
	I1122 00:54:34.064032  698582 cri.go:89] found id: "5a851503d8f86845f2821dfa2135db11f6512a59f26a300ecd84a35339db4496"
	I1122 00:54:34.064036  698582 cri.go:89] found id: "b36da003b1235361d4b8a4e7e49cab04a763af242e9b15f7fb361a03edb9e4c8"
	I1122 00:54:34.064040  698582 cri.go:89] found id: "9deafbe8687dcd224ab5e480cfefa5cc596bb04d62aab6f5da3083aca07488e8"
	I1122 00:54:34.064043  698582 cri.go:89] found id: "a1d1d67ba75cb36995d73540a0a298366b4b32ccbfda1a424c21b0b86506d11d"
	I1122 00:54:34.064046  698582 cri.go:89] found id: "4c192c23a5c2cb8d4827103c705875b67426f13e7541c3c230c0bacb6b6f0ca9"
	I1122 00:54:34.064048  698582 cri.go:89] found id: "fee91ee3c441411dbcd777ca8d5095cfd10e8a67a33ad4caf348ae63ec865a72"
	I1122 00:54:34.064055  698582 cri.go:89] found id: "d5df7182fcdae9d1252372617e3f730dce9069a240030f68c42a96f8a784beb0"
	I1122 00:54:34.064058  698582 cri.go:89] found id: "c68e57a00d3a76b7ae45ba0f8dd0b5bd690d691009ae2395dd8a7a8d4b3955db"
	I1122 00:54:34.064062  698582 cri.go:89] found id: ""
	I1122 00:54:34.064148  698582 ssh_runner.go:195] Run: sudo runc list -f json
	I1122 00:54:34.076374  698582 retry.go:31] will retry after 347.908912ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:54:34Z" level=error msg="open /run/runc: no such file or directory"
	I1122 00:54:34.425045  698582 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:54:34.438940  698582 pause.go:52] kubelet running: false
	I1122 00:54:34.439006  698582 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1122 00:54:34.621891  698582 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1122 00:54:34.621969  698582 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1122 00:54:34.689746  698582 cri.go:89] found id: "ba7fcf01e100d9075f965f94ff668899f01eaaea6cf6c057437439123135dbae"
	I1122 00:54:34.689905  698582 cri.go:89] found id: "862c22e09e90a6b8d8c4549584f6e46b25d2206c9d9578169b2675c47e337141"
	I1122 00:54:34.689926  698582 cri.go:89] found id: "a05ae1ca90b871431d6a63387000b3a0fc2d30bdc217ce9cd70319e940e72234"
	I1122 00:54:34.689945  698582 cri.go:89] found id: "5a851503d8f86845f2821dfa2135db11f6512a59f26a300ecd84a35339db4496"
	I1122 00:54:34.689973  698582 cri.go:89] found id: "b36da003b1235361d4b8a4e7e49cab04a763af242e9b15f7fb361a03edb9e4c8"
	I1122 00:54:34.689996  698582 cri.go:89] found id: "9deafbe8687dcd224ab5e480cfefa5cc596bb04d62aab6f5da3083aca07488e8"
	I1122 00:54:34.690014  698582 cri.go:89] found id: "a1d1d67ba75cb36995d73540a0a298366b4b32ccbfda1a424c21b0b86506d11d"
	I1122 00:54:34.690031  698582 cri.go:89] found id: "4c192c23a5c2cb8d4827103c705875b67426f13e7541c3c230c0bacb6b6f0ca9"
	I1122 00:54:34.690047  698582 cri.go:89] found id: "fee91ee3c441411dbcd777ca8d5095cfd10e8a67a33ad4caf348ae63ec865a72"
	I1122 00:54:34.690086  698582 cri.go:89] found id: "d5df7182fcdae9d1252372617e3f730dce9069a240030f68c42a96f8a784beb0"
	I1122 00:54:34.690106  698582 cri.go:89] found id: "c68e57a00d3a76b7ae45ba0f8dd0b5bd690d691009ae2395dd8a7a8d4b3955db"
	I1122 00:54:34.690122  698582 cri.go:89] found id: ""
	I1122 00:54:34.690201  698582 ssh_runner.go:195] Run: sudo runc list -f json
	I1122 00:54:34.704285  698582 out.go:203] 
	W1122 00:54:34.705764  698582 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:54:34Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:54:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1122 00:54:34.705786  698582 out.go:285] * 
	* 
	W1122 00:54:34.713271  698582 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1122 00:54:34.715015  698582 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-625837 --alsologtostderr -v=1 failed: exit status 80
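Note on the failure above: the pause path on this CRI-O node checks kubelet, disables it, lists kube-system containers with crictl, and then calls `sudo runc list -f json`; that last step exits 1 with `open /run/runc: no such file or directory`, and after the two retries this surfaces as the GUEST_PAUSE error. The sketch below is a minimal reproduction, assuming shell access to the node (for example via `minikube ssh`); the file name and `run` helper are hypothetical and this is not minikube's implementation, only a replay of the probes recorded in the stderr log.

// repro_pause_probe.go - hypothetical reproduction sketch, not minikube code.
// It replays the probes recorded in the pause log: a kubelet status check, a
// crictl listing of kube-system containers, and the `runc list -f json` call
// that fails on this CRI-O node because /run/runc does not exist.
package main

import (
	"fmt"
	"os/exec"
)

// run executes a command via sudo and returns its combined output.
func run(name string, args ...string) (string, error) {
	out, err := exec.Command("sudo", append([]string{name}, args...)...).CombinedOutput()
	return string(out), err
}

func main() {
	// 1. Check kubelet, as the pause path does before and after disabling it.
	if _, err := run("systemctl", "is-active", "--quiet", "kubelet"); err != nil {
		fmt.Println("kubelet running: false")
	} else {
		fmt.Println("kubelet running: true")
	}

	// 2. List kube-system containers the same way the pause path does.
	ids, err := run("crictl", "ps", "-a", "--quiet", "--label", "io.kubernetes.pod.namespace=kube-system")
	fmt.Printf("crictl container ids:\n%s(err=%v)\n", ids, err)

	// 3. The step that actually fails: runc keeps no state under /run/runc on
	//    this CRI-O node, so the listing exits non-zero.
	if out, err := run("runc", "list", "-f", "json"); err != nil {
		fmt.Printf("runc list failed: %v\n%s", err, out)
	}
}

Run on the node, step 3 reproduces the same "Process exited with status 1" / "open /run/runc: no such file or directory" pair seen in the retries at 00:54:33 and 00:54:34 above.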
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-625837
helpers_test.go:243: (dbg) docker inspect old-k8s-version-625837:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c1b8e95ff95e3c8d00f04e99efeff1b9aae77957e1730bef176997569aa985cb",
	        "Created": "2025-11-22T00:52:11.631298738Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 696484,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-22T00:53:29.990117235Z",
	            "FinishedAt": "2025-11-22T00:53:29.137902629Z"
	        },
	        "Image": "sha256:97061622515a85470dad13cb046ad5fc5021fcd2b05aa921a620abb52951cd5d",
	        "ResolvConfPath": "/var/lib/docker/containers/c1b8e95ff95e3c8d00f04e99efeff1b9aae77957e1730bef176997569aa985cb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c1b8e95ff95e3c8d00f04e99efeff1b9aae77957e1730bef176997569aa985cb/hostname",
	        "HostsPath": "/var/lib/docker/containers/c1b8e95ff95e3c8d00f04e99efeff1b9aae77957e1730bef176997569aa985cb/hosts",
	        "LogPath": "/var/lib/docker/containers/c1b8e95ff95e3c8d00f04e99efeff1b9aae77957e1730bef176997569aa985cb/c1b8e95ff95e3c8d00f04e99efeff1b9aae77957e1730bef176997569aa985cb-json.log",
	        "Name": "/old-k8s-version-625837",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-625837:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-625837",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c1b8e95ff95e3c8d00f04e99efeff1b9aae77957e1730bef176997569aa985cb",
	                "LowerDir": "/var/lib/docker/overlay2/d5742c1c24516207890a7b6e13b848d2f42ff607041b29dcbd0346c7d43d472d-init/diff:/var/lib/docker/overlay2/7e8788c6de692bc1c3758a2bb2c4b8da0fbba26855f855c0f3b655bfbdd92f8e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d5742c1c24516207890a7b6e13b848d2f42ff607041b29dcbd0346c7d43d472d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d5742c1c24516207890a7b6e13b848d2f42ff607041b29dcbd0346c7d43d472d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d5742c1c24516207890a7b6e13b848d2f42ff607041b29dcbd0346c7d43d472d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-625837",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-625837/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-625837",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-625837",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-625837",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6f56587011a9bd4086d696d587f02af720d3b10a2deffb9d9ed4024f5dbca0be",
	            "SandboxKey": "/var/run/docker/netns/6f56587011a9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33775"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33776"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33779"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33777"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33778"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-625837": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:d6:5e:39:84:2f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ee4ddaee680d222041a033cf4edb5764a7a32b1715bb1145e84ad0704600fbeb",
	                    "EndpointID": "edf096993a8a45f8965e7cfd56a32f4513cecf21afe5933b0c0b49bc9d1f45f9",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-625837",
	                        "c1b8e95ff95e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-625837 -n old-k8s-version-625837
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-625837 -n old-k8s-version-625837: exit status 2 (366.678632ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-625837 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-625837 logs -n 25: (1.498972741s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-163229 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-163229             │ jenkins │ v1.37.0 │ 22 Nov 25 00:50 UTC │                     │
	│ ssh     │ -p cilium-163229 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-163229             │ jenkins │ v1.37.0 │ 22 Nov 25 00:50 UTC │                     │
	│ ssh     │ -p cilium-163229 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-163229             │ jenkins │ v1.37.0 │ 22 Nov 25 00:50 UTC │                     │
	│ ssh     │ -p cilium-163229 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-163229             │ jenkins │ v1.37.0 │ 22 Nov 25 00:50 UTC │                     │
	│ ssh     │ -p cilium-163229 sudo containerd config dump                                                                                                                                                                                                  │ cilium-163229             │ jenkins │ v1.37.0 │ 22 Nov 25 00:50 UTC │                     │
	│ ssh     │ -p cilium-163229 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-163229             │ jenkins │ v1.37.0 │ 22 Nov 25 00:50 UTC │                     │
	│ ssh     │ -p cilium-163229 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-163229             │ jenkins │ v1.37.0 │ 22 Nov 25 00:50 UTC │                     │
	│ ssh     │ -p cilium-163229 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-163229             │ jenkins │ v1.37.0 │ 22 Nov 25 00:50 UTC │                     │
	│ ssh     │ -p cilium-163229 sudo crio config                                                                                                                                                                                                             │ cilium-163229             │ jenkins │ v1.37.0 │ 22 Nov 25 00:50 UTC │                     │
	│ delete  │ -p cilium-163229                                                                                                                                                                                                                              │ cilium-163229             │ jenkins │ v1.37.0 │ 22 Nov 25 00:50 UTC │ 22 Nov 25 00:50 UTC │
	│ start   │ -p force-systemd-env-634519 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-634519  │ jenkins │ v1.37.0 │ 22 Nov 25 00:50 UTC │ 22 Nov 25 00:51 UTC │
	│ delete  │ -p kubernetes-upgrade-134864                                                                                                                                                                                                                  │ kubernetes-upgrade-134864 │ jenkins │ v1.37.0 │ 22 Nov 25 00:50 UTC │ 22 Nov 25 00:51 UTC │
	│ start   │ -p cert-expiration-621390 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-621390    │ jenkins │ v1.37.0 │ 22 Nov 25 00:51 UTC │ 22 Nov 25 00:51 UTC │
	│ delete  │ -p force-systemd-env-634519                                                                                                                                                                                                                   │ force-systemd-env-634519  │ jenkins │ v1.37.0 │ 22 Nov 25 00:51 UTC │ 22 Nov 25 00:51 UTC │
	│ start   │ -p cert-options-002126 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-002126       │ jenkins │ v1.37.0 │ 22 Nov 25 00:51 UTC │ 22 Nov 25 00:52 UTC │
	│ ssh     │ cert-options-002126 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-002126       │ jenkins │ v1.37.0 │ 22 Nov 25 00:52 UTC │ 22 Nov 25 00:52 UTC │
	│ ssh     │ -p cert-options-002126 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-002126       │ jenkins │ v1.37.0 │ 22 Nov 25 00:52 UTC │ 22 Nov 25 00:52 UTC │
	│ delete  │ -p cert-options-002126                                                                                                                                                                                                                        │ cert-options-002126       │ jenkins │ v1.37.0 │ 22 Nov 25 00:52 UTC │ 22 Nov 25 00:52 UTC │
	│ start   │ -p old-k8s-version-625837 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-625837    │ jenkins │ v1.37.0 │ 22 Nov 25 00:52 UTC │ 22 Nov 25 00:53 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-625837 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-625837    │ jenkins │ v1.37.0 │ 22 Nov 25 00:53 UTC │                     │
	│ stop    │ -p old-k8s-version-625837 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-625837    │ jenkins │ v1.37.0 │ 22 Nov 25 00:53 UTC │ 22 Nov 25 00:53 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-625837 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-625837    │ jenkins │ v1.37.0 │ 22 Nov 25 00:53 UTC │ 22 Nov 25 00:53 UTC │
	│ start   │ -p old-k8s-version-625837 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-625837    │ jenkins │ v1.37.0 │ 22 Nov 25 00:53 UTC │ 22 Nov 25 00:54 UTC │
	│ image   │ old-k8s-version-625837 image list --format=json                                                                                                                                                                                               │ old-k8s-version-625837    │ jenkins │ v1.37.0 │ 22 Nov 25 00:54 UTC │ 22 Nov 25 00:54 UTC │
	│ pause   │ -p old-k8s-version-625837 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-625837    │ jenkins │ v1.37.0 │ 22 Nov 25 00:54 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 00:53:29
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1122 00:53:29.697656  696355 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:53:29.697849  696355 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:53:29.697863  696355 out.go:374] Setting ErrFile to fd 2...
	I1122 00:53:29.697870  696355 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:53:29.698236  696355 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1122 00:53:29.698708  696355 out.go:368] Setting JSON to false
	I1122 00:53:29.699709  696355 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":20126,"bootTime":1763752684,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1122 00:53:29.699805  696355 start.go:143] virtualization:  
	I1122 00:53:29.703181  696355 out.go:179] * [old-k8s-version-625837] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1122 00:53:29.707173  696355 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:53:29.707285  696355 notify.go:221] Checking for updates...
	I1122 00:53:29.713140  696355 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:53:29.716252  696355 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1122 00:53:29.719246  696355 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-513600/.minikube
	I1122 00:53:29.722262  696355 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1122 00:53:29.725332  696355 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:53:29.728838  696355 config.go:182] Loaded profile config "old-k8s-version-625837": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1122 00:53:29.732277  696355 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1122 00:53:29.735143  696355 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:53:29.762590  696355 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1122 00:53:29.762751  696355 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:53:29.828334  696355 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-22 00:53:29.818324442 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:53:29.828442  696355 docker.go:319] overlay module found
	I1122 00:53:29.831506  696355 out.go:179] * Using the docker driver based on existing profile
	I1122 00:53:29.834326  696355 start.go:309] selected driver: docker
	I1122 00:53:29.834344  696355 start.go:930] validating driver "docker" against &{Name:old-k8s-version-625837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-625837 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:53:29.834444  696355 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:53:29.835166  696355 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:53:29.899720  696355 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-22 00:53:29.890852845 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:53:29.900060  696355 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:53:29.900094  696355 cni.go:84] Creating CNI manager for ""
	I1122 00:53:29.900150  696355 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:53:29.900188  696355 start.go:353] cluster config:
	{Name:old-k8s-version-625837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-625837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:53:29.905135  696355 out.go:179] * Starting "old-k8s-version-625837" primary control-plane node in "old-k8s-version-625837" cluster
	I1122 00:53:29.908029  696355 cache.go:134] Beginning downloading kic base image for docker with crio
	I1122 00:53:29.911076  696355 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:53:29.914017  696355 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1122 00:53:29.914076  696355 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1122 00:53:29.914115  696355 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:53:29.914118  696355 cache.go:65] Caching tarball of preloaded images
	I1122 00:53:29.914299  696355 preload.go:238] Found /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1122 00:53:29.914313  696355 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1122 00:53:29.914454  696355 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/old-k8s-version-625837/config.json ...
	I1122 00:53:29.938399  696355 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:53:29.938418  696355 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:53:29.938440  696355 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:53:29.938464  696355 start.go:360] acquireMachinesLock for old-k8s-version-625837: {Name:mk3a3c501372daeff07fa7d5836846284b6136f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:53:29.938533  696355 start.go:364] duration metric: took 46.161µs to acquireMachinesLock for "old-k8s-version-625837"
	I1122 00:53:29.938560  696355 start.go:96] Skipping create...Using existing machine configuration
	I1122 00:53:29.938565  696355 fix.go:54] fixHost starting: 
	I1122 00:53:29.938822  696355 cli_runner.go:164] Run: docker container inspect old-k8s-version-625837 --format={{.State.Status}}
	I1122 00:53:29.955983  696355 fix.go:112] recreateIfNeeded on old-k8s-version-625837: state=Stopped err=<nil>
	W1122 00:53:29.956013  696355 fix.go:138] unexpected machine state, will restart: <nil>
	I1122 00:53:29.959255  696355 out.go:252] * Restarting existing docker container for "old-k8s-version-625837" ...
	I1122 00:53:29.959340  696355 cli_runner.go:164] Run: docker start old-k8s-version-625837
	I1122 00:53:30.250776  696355 cli_runner.go:164] Run: docker container inspect old-k8s-version-625837 --format={{.State.Status}}
	I1122 00:53:30.272388  696355 kic.go:430] container "old-k8s-version-625837" state is running.
	I1122 00:53:30.272775  696355 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-625837
	I1122 00:53:30.299982  696355 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/old-k8s-version-625837/config.json ...
	I1122 00:53:30.300216  696355 machine.go:94] provisionDockerMachine start ...
	I1122 00:53:30.300282  696355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-625837
	I1122 00:53:30.321638  696355 main.go:143] libmachine: Using SSH client type: native
	I1122 00:53:30.321977  696355 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33775 <nil> <nil>}
	I1122 00:53:30.321988  696355 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:53:30.322572  696355 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1122 00:53:33.461668  696355 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-625837
	
	I1122 00:53:33.461743  696355 ubuntu.go:182] provisioning hostname "old-k8s-version-625837"
	I1122 00:53:33.461871  696355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-625837
	I1122 00:53:33.480356  696355 main.go:143] libmachine: Using SSH client type: native
	I1122 00:53:33.480678  696355 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33775 <nil> <nil>}
	I1122 00:53:33.480696  696355 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-625837 && echo "old-k8s-version-625837" | sudo tee /etc/hostname
	I1122 00:53:33.632441  696355 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-625837
	
	I1122 00:53:33.632524  696355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-625837
	I1122 00:53:33.650814  696355 main.go:143] libmachine: Using SSH client type: native
	I1122 00:53:33.651144  696355 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33775 <nil> <nil>}
	I1122 00:53:33.651171  696355 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-625837' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-625837/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-625837' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:53:33.790057  696355 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1122 00:53:33.790086  696355 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-513600/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-513600/.minikube}
	I1122 00:53:33.790130  696355 ubuntu.go:190] setting up certificates
	I1122 00:53:33.790153  696355 provision.go:84] configureAuth start
	I1122 00:53:33.790221  696355 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-625837
	I1122 00:53:33.807142  696355 provision.go:143] copyHostCerts
	I1122 00:53:33.807213  696355 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem, removing ...
	I1122 00:53:33.807235  696355 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem
	I1122 00:53:33.807331  696355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem (1078 bytes)
	I1122 00:53:33.807444  696355 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem, removing ...
	I1122 00:53:33.807456  696355 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem
	I1122 00:53:33.807485  696355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem (1123 bytes)
	I1122 00:53:33.807554  696355 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem, removing ...
	I1122 00:53:33.807563  696355 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem
	I1122 00:53:33.807587  696355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem (1675 bytes)
	I1122 00:53:33.807646  696355 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-625837 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-625837]
	I1122 00:53:33.889343  696355 provision.go:177] copyRemoteCerts
	I1122 00:53:33.889411  696355 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:53:33.889449  696355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-625837
	I1122 00:53:33.905891  696355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33775 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/old-k8s-version-625837/id_rsa Username:docker}
	I1122 00:53:34.007368  696355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:53:34.026346  696355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1122 00:53:34.044897  696355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1122 00:53:34.063378  696355 provision.go:87] duration metric: took 273.200251ms to configureAuth
	I1122 00:53:34.063406  696355 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:53:34.063636  696355 config.go:182] Loaded profile config "old-k8s-version-625837": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1122 00:53:34.063743  696355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-625837
	I1122 00:53:34.081275  696355 main.go:143] libmachine: Using SSH client type: native
	I1122 00:53:34.081589  696355 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33775 <nil> <nil>}
	I1122 00:53:34.081617  696355 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1122 00:53:34.427144  696355 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1122 00:53:34.427176  696355 machine.go:97] duration metric: took 4.126931098s to provisionDockerMachine
	I1122 00:53:34.427188  696355 start.go:293] postStartSetup for "old-k8s-version-625837" (driver="docker")
	I1122 00:53:34.427198  696355 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:53:34.427280  696355 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:53:34.427329  696355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-625837
	I1122 00:53:34.445667  696355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33775 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/old-k8s-version-625837/id_rsa Username:docker}
	I1122 00:53:34.550920  696355 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:53:34.554353  696355 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:53:34.554384  696355 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:53:34.554396  696355 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/addons for local assets ...
	I1122 00:53:34.554449  696355 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/files for local assets ...
	I1122 00:53:34.554527  696355 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem -> 5169372.pem in /etc/ssl/certs
	I1122 00:53:34.554655  696355 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:53:34.562199  696355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /etc/ssl/certs/5169372.pem (1708 bytes)
	I1122 00:53:34.579236  696355 start.go:296] duration metric: took 152.033338ms for postStartSetup
	I1122 00:53:34.579333  696355 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:53:34.579372  696355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-625837
	I1122 00:53:34.597049  696355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33775 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/old-k8s-version-625837/id_rsa Username:docker}
	I1122 00:53:34.694795  696355 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:53:34.699535  696355 fix.go:56] duration metric: took 4.760962861s for fixHost
	I1122 00:53:34.699557  696355 start.go:83] releasing machines lock for "old-k8s-version-625837", held for 4.761014904s
	I1122 00:53:34.699624  696355 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-625837
	I1122 00:53:34.716716  696355 ssh_runner.go:195] Run: cat /version.json
	I1122 00:53:34.716781  696355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-625837
	I1122 00:53:34.717025  696355 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:53:34.717081  696355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-625837
	I1122 00:53:34.742252  696355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33775 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/old-k8s-version-625837/id_rsa Username:docker}
	I1122 00:53:34.755425  696355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33775 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/old-k8s-version-625837/id_rsa Username:docker}
	I1122 00:53:34.845547  696355 ssh_runner.go:195] Run: systemctl --version
	I1122 00:53:34.935097  696355 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1122 00:53:34.971248  696355 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:53:34.975496  696355 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:53:34.975596  696355 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:53:34.983356  696355 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1122 00:53:34.983381  696355 start.go:496] detecting cgroup driver to use...
	I1122 00:53:34.983442  696355 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1122 00:53:34.983507  696355 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1122 00:53:34.998054  696355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1122 00:53:35.014608  696355 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:53:35.014690  696355 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:53:35.031562  696355 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:53:35.044873  696355 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:53:35.169101  696355 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:53:35.294188  696355 docker.go:234] disabling docker service ...
	I1122 00:53:35.294311  696355 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:53:35.309557  696355 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:53:35.323343  696355 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:53:35.451744  696355 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:53:35.585104  696355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:53:35.598655  696355 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:53:35.613007  696355 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1122 00:53:35.613068  696355 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:53:35.622917  696355 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1122 00:53:35.622994  696355 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:53:35.631909  696355 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:53:35.640796  696355 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:53:35.649455  696355 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:53:35.657414  696355 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:53:35.666963  696355 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:53:35.675178  696355 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:53:35.683783  696355 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:53:35.691553  696355 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:53:35.699261  696355 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:53:35.809355  696355 ssh_runner.go:195] Run: sudo systemctl restart crio
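The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: they pin the pause image, switch the cgroup manager to cgroupfs, move conmon into the pod cgroup, and re-add a default_sysctls entry that opens unprivileged ports, before the daemon-reload and crio restart. A minimal sketch (not part of the captured run) of how the result could be confirmed on the node, assuming the same profile name and file path:

    # Inspect the drop-in that the sed commands above rewrote.
    minikube -p old-k8s-version-625837 ssh -- sudo grep -E \
      'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # Expected, given the edits above:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",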
	I1122 00:53:35.969563  696355 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1122 00:53:35.969681  696355 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1122 00:53:35.973526  696355 start.go:564] Will wait 60s for crictl version
	I1122 00:53:35.973628  696355 ssh_runner.go:195] Run: which crictl
	I1122 00:53:35.977378  696355 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:53:36.002921  696355 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1122 00:53:36.003111  696355 ssh_runner.go:195] Run: crio --version
	I1122 00:53:36.039558  696355 ssh_runner.go:195] Run: crio --version
	I1122 00:53:36.073377  696355 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.2 ...
	I1122 00:53:36.076290  696355 cli_runner.go:164] Run: docker network inspect old-k8s-version-625837 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:53:36.093114  696355 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1122 00:53:36.097235  696355 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:53:36.107366  696355 kubeadm.go:884] updating cluster {Name:old-k8s-version-625837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-625837 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1122 00:53:36.107489  696355 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1122 00:53:36.107555  696355 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:53:36.139679  696355 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:53:36.139699  696355 crio.go:433] Images already preloaded, skipping extraction
	I1122 00:53:36.139754  696355 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:53:36.172161  696355 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:53:36.172237  696355 cache_images.go:86] Images are preloaded, skipping loading
	I1122 00:53:36.172263  696355 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1122 00:53:36.172386  696355 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-625837 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-625837 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
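The [Service] fragment above uses the standard systemd drop-in pattern: the bare "ExecStart=" clears whatever command the packaged kubelet unit defines, and the following line substitutes minikube's own invocation (hostname override, node IP and kubeconfig paths taken from the cluster config shown). The fragment is written a few steps later as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A sketch, not from the captured output, of how to view the merged unit on the node:

    # Show the kubelet unit together with its drop-ins, including the
    # 10-kubeadm.conf fragment generated above.
    minikube -p old-k8s-version-625837 ssh -- systemctl cat kubelet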
	I1122 00:53:36.172492  696355 ssh_runner.go:195] Run: crio config
	I1122 00:53:36.226526  696355 cni.go:84] Creating CNI manager for ""
	I1122 00:53:36.226551  696355 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:53:36.226572  696355 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1122 00:53:36.226595  696355 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-625837 NodeName:old-k8s-version-625837 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 00:53:36.226728  696355 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-625837"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
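The YAML above is the kubeadm configuration minikube renders from the options listed at kubeadm.go:190: an InitConfiguration carrying the node's advertise address and CRI socket, a ClusterConfiguration pinning the control-plane endpoint, cert SANs and admission plugins, plus KubeletConfiguration and KubeProxyConfiguration blocks. It is staged as /var/tmp/minikube/kubeadm.yaml.new a few lines below. On this restart path no kubeadm init is run (the log later notes the running cluster needs no reconfiguration), but on a clean start minikube hands such a file to kubeadm. As an illustration only, and assuming kubeadm sits alongside the kubelet and kubectl binaries referenced in this log, a manual invocation would look roughly like:

    # Sketch, not executed in this run: drive kubeadm with the staged config.
    sudo /var/lib/minikube/binaries/v1.28.0/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml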
	
	I1122 00:53:36.226806  696355 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1122 00:53:36.234625  696355 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:53:36.234700  696355 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1122 00:53:36.242195  696355 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1122 00:53:36.254508  696355 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:53:36.267247  696355 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1122 00:53:36.279661  696355 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1122 00:53:36.283167  696355 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:53:36.292180  696355 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:53:36.404805  696355 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:53:36.420577  696355 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/old-k8s-version-625837 for IP: 192.168.85.2
	I1122 00:53:36.420595  696355 certs.go:195] generating shared ca certs ...
	I1122 00:53:36.420611  696355 certs.go:227] acquiring lock for ca certs: {Name:mkaf4c79493334cb2058b349c36be7473837f9b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:53:36.420758  696355 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key
	I1122 00:53:36.420815  696355 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key
	I1122 00:53:36.420827  696355 certs.go:257] generating profile certs ...
	I1122 00:53:36.420921  696355 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/old-k8s-version-625837/client.key
	I1122 00:53:36.420989  696355 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/old-k8s-version-625837/apiserver.key.4c41b9ba
	I1122 00:53:36.421035  696355 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/old-k8s-version-625837/proxy-client.key
	I1122 00:53:36.421154  696355 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem (1338 bytes)
	W1122 00:53:36.421194  696355 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937_empty.pem, impossibly tiny 0 bytes
	I1122 00:53:36.421206  696355 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:53:36.421233  696355 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:53:36.421260  696355 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:53:36.421291  696355 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem (1675 bytes)
	I1122 00:53:36.421348  696355 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem (1708 bytes)
	I1122 00:53:36.422066  696355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:53:36.451965  696355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1122 00:53:36.478467  696355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:53:36.501700  696355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:53:36.524576  696355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/old-k8s-version-625837/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1122 00:53:36.559570  696355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/old-k8s-version-625837/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1122 00:53:36.587133  696355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/old-k8s-version-625837/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:53:36.608409  696355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/old-k8s-version-625837/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1122 00:53:36.628460  696355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:53:36.648541  696355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem --> /usr/share/ca-certificates/516937.pem (1338 bytes)
	I1122 00:53:36.667123  696355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /usr/share/ca-certificates/5169372.pem (1708 bytes)
	I1122 00:53:36.688315  696355 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 00:53:36.701886  696355 ssh_runner.go:195] Run: openssl version
	I1122 00:53:36.707935  696355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516937.pem && ln -fs /usr/share/ca-certificates/516937.pem /etc/ssl/certs/516937.pem"
	I1122 00:53:36.716483  696355 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516937.pem
	I1122 00:53:36.719999  696355 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:55 /usr/share/ca-certificates/516937.pem
	I1122 00:53:36.720062  696355 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516937.pem
	I1122 00:53:36.773298  696355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516937.pem /etc/ssl/certs/51391683.0"
	I1122 00:53:36.782359  696355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5169372.pem && ln -fs /usr/share/ca-certificates/5169372.pem /etc/ssl/certs/5169372.pem"
	I1122 00:53:36.790199  696355 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5169372.pem
	I1122 00:53:36.793916  696355 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:55 /usr/share/ca-certificates/5169372.pem
	I1122 00:53:36.793998  696355 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5169372.pem
	I1122 00:53:36.834819  696355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5169372.pem /etc/ssl/certs/3ec20f2e.0"
	I1122 00:53:36.842941  696355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:53:36.851284  696355 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:53:36.855072  696355 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:53:36.855170  696355 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:53:36.896087  696355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1122 00:53:36.904069  696355 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:53:36.908144  696355 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1122 00:53:36.953861  696355 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1122 00:53:36.995851  696355 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1122 00:53:37.040021  696355 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1122 00:53:37.083671  696355 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1122 00:53:37.132073  696355 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
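The six openssl runs above probe the existing control-plane certificates with -checkend 86400, i.e. "will this certificate still be valid 24 hours from now?"; openssl exits 0 if the certificate survives the window and 1 if it would expire, presumably so stale certificates can be regenerated before the profile is reused. A standalone sketch of one such probe (same flags as the log, hypothetical wrapper):

    if sudo openssl x509 -noout \
         -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
         -checkend 86400; then
      echo "certificate is valid for at least another 24h"
    else
      echo "certificate expires within 24h (or is already invalid)"
    fi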
	I1122 00:53:37.199857  696355 kubeadm.go:401] StartCluster: {Name:old-k8s-version-625837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-625837 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:53:37.200009  696355 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1122 00:53:37.200099  696355 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:53:37.260727  696355 cri.go:89] found id: "9deafbe8687dcd224ab5e480cfefa5cc596bb04d62aab6f5da3083aca07488e8"
	I1122 00:53:37.260801  696355 cri.go:89] found id: "a1d1d67ba75cb36995d73540a0a298366b4b32ccbfda1a424c21b0b86506d11d"
	I1122 00:53:37.260821  696355 cri.go:89] found id: "4c192c23a5c2cb8d4827103c705875b67426f13e7541c3c230c0bacb6b6f0ca9"
	I1122 00:53:37.260844  696355 cri.go:89] found id: "fee91ee3c441411dbcd777ca8d5095cfd10e8a67a33ad4caf348ae63ec865a72"
	I1122 00:53:37.260862  696355 cri.go:89] found id: ""
	I1122 00:53:37.260934  696355 ssh_runner.go:195] Run: sudo runc list -f json
	W1122 00:53:37.280060  696355 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:53:37Z" level=error msg="open /run/runc: no such file or directory"
	I1122 00:53:37.280188  696355 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 00:53:37.299957  696355 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1122 00:53:37.300024  696355 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1122 00:53:37.300091  696355 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1122 00:53:37.308635  696355 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1122 00:53:37.309243  696355 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-625837" does not appear in /home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1122 00:53:37.309550  696355 kubeconfig.go:62] /home/jenkins/minikube-integration/21934-513600/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-625837" cluster setting kubeconfig missing "old-k8s-version-625837" context setting]
	I1122 00:53:37.310188  696355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/kubeconfig: {Name:mkcd094d8a765e5a81369c0fb33cb5e696b17bb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:53:37.311841  696355 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1122 00:53:37.322910  696355 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1122 00:53:37.322983  696355 kubeadm.go:602] duration metric: took 22.938954ms to restartPrimaryControlPlane
	I1122 00:53:37.323009  696355 kubeadm.go:403] duration metric: took 123.163317ms to StartCluster
	I1122 00:53:37.323041  696355 settings.go:142] acquiring lock: {Name:mk6c31eb57ec65b047b78b4e1046e03fe7cc77bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:53:37.323123  696355 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1122 00:53:37.324824  696355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/kubeconfig: {Name:mkcd094d8a765e5a81369c0fb33cb5e696b17bb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:53:37.325090  696355 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:53:37.325244  696355 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1122 00:53:37.325583  696355 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-625837"
	I1122 00:53:37.325612  696355 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-625837"
	W1122 00:53:37.325831  696355 addons.go:248] addon storage-provisioner should already be in state true
	I1122 00:53:37.325861  696355 host.go:66] Checking if "old-k8s-version-625837" exists ...
	I1122 00:53:37.326344  696355 cli_runner.go:164] Run: docker container inspect old-k8s-version-625837 --format={{.State.Status}}
	I1122 00:53:37.325818  696355 addons.go:70] Setting dashboard=true in profile "old-k8s-version-625837"
	I1122 00:53:37.326546  696355 addons.go:239] Setting addon dashboard=true in "old-k8s-version-625837"
	W1122 00:53:37.326598  696355 addons.go:248] addon dashboard should already be in state true
	I1122 00:53:37.326634  696355 host.go:66] Checking if "old-k8s-version-625837" exists ...
	I1122 00:53:37.327066  696355 cli_runner.go:164] Run: docker container inspect old-k8s-version-625837 --format={{.State.Status}}
	I1122 00:53:37.325411  696355 config.go:182] Loaded profile config "old-k8s-version-625837": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1122 00:53:37.328499  696355 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-625837"
	I1122 00:53:37.328513  696355 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-625837"
	I1122 00:53:37.328758  696355 cli_runner.go:164] Run: docker container inspect old-k8s-version-625837 --format={{.State.Status}}
	I1122 00:53:37.336238  696355 out.go:179] * Verifying Kubernetes components...
	I1122 00:53:37.339551  696355 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:53:37.382729  696355 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1122 00:53:37.386530  696355 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:53:37.386556  696355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1122 00:53:37.386627  696355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-625837
	I1122 00:53:37.399786  696355 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-625837"
	W1122 00:53:37.401945  696355 addons.go:248] addon default-storageclass should already be in state true
	I1122 00:53:37.401979  696355 host.go:66] Checking if "old-k8s-version-625837" exists ...
	I1122 00:53:37.402423  696355 cli_runner.go:164] Run: docker container inspect old-k8s-version-625837 --format={{.State.Status}}
	I1122 00:53:37.401916  696355 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1122 00:53:37.407835  696355 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1122 00:53:37.410725  696355 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1122 00:53:37.410752  696355 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1122 00:53:37.410824  696355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-625837
	I1122 00:53:37.454996  696355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33775 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/old-k8s-version-625837/id_rsa Username:docker}
	I1122 00:53:37.463863  696355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33775 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/old-k8s-version-625837/id_rsa Username:docker}
	I1122 00:53:37.481513  696355 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1122 00:53:37.481537  696355 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1122 00:53:37.481600  696355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-625837
	I1122 00:53:37.508574  696355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33775 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/old-k8s-version-625837/id_rsa Username:docker}
	I1122 00:53:37.714037  696355 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:53:37.735196  696355 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-625837" to be "Ready" ...
	I1122 00:53:37.753575  696355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:53:37.782948  696355 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1122 00:53:37.783014  696355 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1122 00:53:37.803165  696355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1122 00:53:37.842360  696355 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1122 00:53:37.842426  696355 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1122 00:53:37.905142  696355 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1122 00:53:37.905208  696355 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1122 00:53:37.975265  696355 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1122 00:53:37.975341  696355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1122 00:53:38.033772  696355 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1122 00:53:38.033862  696355 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1122 00:53:38.083095  696355 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1122 00:53:38.083169  696355 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1122 00:53:38.116056  696355 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1122 00:53:38.116128  696355 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1122 00:53:38.130784  696355 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1122 00:53:38.130850  696355 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1122 00:53:38.150400  696355 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1122 00:53:38.150472  696355 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1122 00:53:38.171587  696355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1122 00:53:43.045982  696355 node_ready.go:49] node "old-k8s-version-625837" is "Ready"
	I1122 00:53:43.046008  696355 node_ready.go:38] duration metric: took 5.310783791s for node "old-k8s-version-625837" to be "Ready" ...
	I1122 00:53:43.046022  696355 api_server.go:52] waiting for apiserver process to appear ...
	I1122 00:53:43.046082  696355 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:53:44.435967  696355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.682313868s)
	I1122 00:53:44.436042  696355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.632812115s)
	I1122 00:53:45.082530  696355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.910842058s)
	I1122 00:53:45.082722  696355 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.036628124s)
	I1122 00:53:45.082740  696355 api_server.go:72] duration metric: took 7.757307049s to wait for apiserver process to appear ...
	I1122 00:53:45.082747  696355 api_server.go:88] waiting for apiserver healthz status ...
	I1122 00:53:45.082765  696355 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1122 00:53:45.085907  696355 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-625837 addons enable metrics-server
	
	I1122 00:53:45.088977  696355 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1122 00:53:45.092165  696355 addons.go:530] duration metric: took 7.766905414s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1122 00:53:45.094096  696355 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1122 00:53:45.095830  696355 api_server.go:141] control plane version: v1.28.0
	I1122 00:53:45.095869  696355 api_server.go:131] duration metric: took 13.114446ms to wait for apiserver health ...
	I1122 00:53:45.095882  696355 system_pods.go:43] waiting for kube-system pods to appear ...
	I1122 00:53:45.100332  696355 system_pods.go:59] 8 kube-system pods found
	I1122 00:53:45.100377  696355 system_pods.go:61] "coredns-5dd5756b68-6m4nr" [21a6b372-6765-44be-afb9-9dcaf8246818] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:53:45.100388  696355 system_pods.go:61] "etcd-old-k8s-version-625837" [ad180e51-5a9b-40c7-a5df-79ae88a52cdd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1122 00:53:45.100394  696355 system_pods.go:61] "kindnet-h6vbs" [136803c6-4591-42d7-b387-7aa8c6c6b628] Running
	I1122 00:53:45.100403  696355 system_pods.go:61] "kube-apiserver-old-k8s-version-625837" [51c5da43-2565-4b7d-92bb-6cebb5a661f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1122 00:53:45.100411  696355 system_pods.go:61] "kube-controller-manager-old-k8s-version-625837" [14cf17d6-f0ee-4265-a046-51b9c2134e13] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1122 00:53:45.100416  696355 system_pods.go:61] "kube-proxy-zdmf6" [7b5dde4d-792c-4340-9621-ccd57f294d20] Running
	I1122 00:53:45.100423  696355 system_pods.go:61] "kube-scheduler-old-k8s-version-625837" [6afcc4bd-f6b4-462e-bc0a-9122c24132b8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1122 00:53:45.100430  696355 system_pods.go:61] "storage-provisioner" [e45decb9-b863-4eeb-8363-e8134ea94857] Running
	I1122 00:53:45.100437  696355 system_pods.go:74] duration metric: took 4.547838ms to wait for pod list to return data ...
	I1122 00:53:45.100455  696355 default_sa.go:34] waiting for default service account to be created ...
	I1122 00:53:45.103648  696355 default_sa.go:45] found service account: "default"
	I1122 00:53:45.103682  696355 default_sa.go:55] duration metric: took 3.218466ms for default service account to be created ...
	I1122 00:53:45.103694  696355 system_pods.go:116] waiting for k8s-apps to be running ...
	I1122 00:53:45.111174  696355 system_pods.go:86] 8 kube-system pods found
	I1122 00:53:45.111232  696355 system_pods.go:89] "coredns-5dd5756b68-6m4nr" [21a6b372-6765-44be-afb9-9dcaf8246818] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:53:45.111246  696355 system_pods.go:89] "etcd-old-k8s-version-625837" [ad180e51-5a9b-40c7-a5df-79ae88a52cdd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1122 00:53:45.111253  696355 system_pods.go:89] "kindnet-h6vbs" [136803c6-4591-42d7-b387-7aa8c6c6b628] Running
	I1122 00:53:45.111262  696355 system_pods.go:89] "kube-apiserver-old-k8s-version-625837" [51c5da43-2565-4b7d-92bb-6cebb5a661f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1122 00:53:45.111270  696355 system_pods.go:89] "kube-controller-manager-old-k8s-version-625837" [14cf17d6-f0ee-4265-a046-51b9c2134e13] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1122 00:53:45.111284  696355 system_pods.go:89] "kube-proxy-zdmf6" [7b5dde4d-792c-4340-9621-ccd57f294d20] Running
	I1122 00:53:45.111291  696355 system_pods.go:89] "kube-scheduler-old-k8s-version-625837" [6afcc4bd-f6b4-462e-bc0a-9122c24132b8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1122 00:53:45.111307  696355 system_pods.go:89] "storage-provisioner" [e45decb9-b863-4eeb-8363-e8134ea94857] Running
	I1122 00:53:45.111317  696355 system_pods.go:126] duration metric: took 7.614694ms to wait for k8s-apps to be running ...
	I1122 00:53:45.111326  696355 system_svc.go:44] waiting for kubelet service to be running ....
	I1122 00:53:45.111399  696355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:53:45.127889  696355 system_svc.go:56] duration metric: took 16.552032ms WaitForService to wait for kubelet
	I1122 00:53:45.127999  696355 kubeadm.go:587] duration metric: took 7.802562823s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:53:45.128050  696355 node_conditions.go:102] verifying NodePressure condition ...
	I1122 00:53:45.132634  696355 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1122 00:53:45.132673  696355 node_conditions.go:123] node cpu capacity is 2
	I1122 00:53:45.132688  696355 node_conditions.go:105] duration metric: took 4.614396ms to run NodePressure ...
	I1122 00:53:45.132701  696355 start.go:242] waiting for startup goroutines ...
	I1122 00:53:45.132709  696355 start.go:247] waiting for cluster config update ...
	I1122 00:53:45.132736  696355 start.go:256] writing updated cluster config ...
	I1122 00:53:45.133054  696355 ssh_runner.go:195] Run: rm -f paused
	I1122 00:53:45.139640  696355 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:53:45.146437  696355 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-6m4nr" in "kube-system" namespace to be "Ready" or be gone ...
	W1122 00:53:47.152389  696355 pod_ready.go:104] pod "coredns-5dd5756b68-6m4nr" is not "Ready", error: <nil>
	W1122 00:53:49.651800  696355 pod_ready.go:104] pod "coredns-5dd5756b68-6m4nr" is not "Ready", error: <nil>
	W1122 00:53:51.652114  696355 pod_ready.go:104] pod "coredns-5dd5756b68-6m4nr" is not "Ready", error: <nil>
	W1122 00:53:54.152935  696355 pod_ready.go:104] pod "coredns-5dd5756b68-6m4nr" is not "Ready", error: <nil>
	W1122 00:53:56.155182  696355 pod_ready.go:104] pod "coredns-5dd5756b68-6m4nr" is not "Ready", error: <nil>
	W1122 00:53:58.653110  696355 pod_ready.go:104] pod "coredns-5dd5756b68-6m4nr" is not "Ready", error: <nil>
	W1122 00:54:01.154598  696355 pod_ready.go:104] pod "coredns-5dd5756b68-6m4nr" is not "Ready", error: <nil>
	W1122 00:54:03.656114  696355 pod_ready.go:104] pod "coredns-5dd5756b68-6m4nr" is not "Ready", error: <nil>
	W1122 00:54:06.154435  696355 pod_ready.go:104] pod "coredns-5dd5756b68-6m4nr" is not "Ready", error: <nil>
	W1122 00:54:08.652833  696355 pod_ready.go:104] pod "coredns-5dd5756b68-6m4nr" is not "Ready", error: <nil>
	W1122 00:54:10.653012  696355 pod_ready.go:104] pod "coredns-5dd5756b68-6m4nr" is not "Ready", error: <nil>
	W1122 00:54:13.152850  696355 pod_ready.go:104] pod "coredns-5dd5756b68-6m4nr" is not "Ready", error: <nil>
	W1122 00:54:15.153474  696355 pod_ready.go:104] pod "coredns-5dd5756b68-6m4nr" is not "Ready", error: <nil>
	W1122 00:54:17.652410  696355 pod_ready.go:104] pod "coredns-5dd5756b68-6m4nr" is not "Ready", error: <nil>
	I1122 00:54:19.652733  696355 pod_ready.go:94] pod "coredns-5dd5756b68-6m4nr" is "Ready"
	I1122 00:54:19.652761  696355 pod_ready.go:86] duration metric: took 34.506275345s for pod "coredns-5dd5756b68-6m4nr" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:54:19.656053  696355 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-625837" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:54:19.661033  696355 pod_ready.go:94] pod "etcd-old-k8s-version-625837" is "Ready"
	I1122 00:54:19.661062  696355 pod_ready.go:86] duration metric: took 4.98104ms for pod "etcd-old-k8s-version-625837" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:54:19.664123  696355 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-625837" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:54:19.669077  696355 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-625837" is "Ready"
	I1122 00:54:19.669105  696355 pod_ready.go:86] duration metric: took 4.958772ms for pod "kube-apiserver-old-k8s-version-625837" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:54:19.672355  696355 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-625837" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:54:19.849924  696355 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-625837" is "Ready"
	I1122 00:54:19.849949  696355 pod_ready.go:86] duration metric: took 177.570389ms for pod "kube-controller-manager-old-k8s-version-625837" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:54:20.051011  696355 pod_ready.go:83] waiting for pod "kube-proxy-zdmf6" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:54:20.450336  696355 pod_ready.go:94] pod "kube-proxy-zdmf6" is "Ready"
	I1122 00:54:20.450364  696355 pod_ready.go:86] duration metric: took 399.327808ms for pod "kube-proxy-zdmf6" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:54:20.650341  696355 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-625837" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:54:21.049863  696355 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-625837" is "Ready"
	I1122 00:54:21.049890  696355 pod_ready.go:86] duration metric: took 399.520984ms for pod "kube-scheduler-old-k8s-version-625837" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:54:21.049902  696355 pod_ready.go:40] duration metric: took 35.910213843s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:54:21.101608  696355 start.go:628] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1122 00:54:21.104722  696355 out.go:203] 
	W1122 00:54:21.107769  696355 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1122 00:54:21.110616  696355 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1122 00:54:21.113351  696355 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-625837" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 22 00:54:21 old-k8s-version-625837 crio[653]: time="2025-11-22T00:54:21.630800861Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:54:21 old-k8s-version-625837 crio[653]: time="2025-11-22T00:54:21.64028348Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:54:21 old-k8s-version-625837 crio[653]: time="2025-11-22T00:54:21.640932831Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:54:21 old-k8s-version-625837 crio[653]: time="2025-11-22T00:54:21.657621032Z" level=info msg="Created container d5df7182fcdae9d1252372617e3f730dce9069a240030f68c42a96f8a784beb0: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l5hnj/dashboard-metrics-scraper" id=f92a0282-e6ca-43d9-a661-2d2ed176a2c9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:54:21 old-k8s-version-625837 crio[653]: time="2025-11-22T00:54:21.662320406Z" level=info msg="Starting container: d5df7182fcdae9d1252372617e3f730dce9069a240030f68c42a96f8a784beb0" id=8fcc2171-fa09-4bb6-bd79-b0d028db0270 name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:54:21 old-k8s-version-625837 crio[653]: time="2025-11-22T00:54:21.664605631Z" level=info msg="Started container" PID=1666 containerID=d5df7182fcdae9d1252372617e3f730dce9069a240030f68c42a96f8a784beb0 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l5hnj/dashboard-metrics-scraper id=8fcc2171-fa09-4bb6-bd79-b0d028db0270 name=/runtime.v1.RuntimeService/StartContainer sandboxID=958c2701ff983f0ad7298f1e136c428203606ac2a8085f04ca54797af8f4a1b8
	Nov 22 00:54:21 old-k8s-version-625837 conmon[1664]: conmon d5df7182fcdae9d12523 <ninfo>: container 1666 exited with status 1
	Nov 22 00:54:21 old-k8s-version-625837 crio[653]: time="2025-11-22T00:54:21.950919043Z" level=info msg="Removing container: 058de021acd50f7260a63c0cbbdce1ea31f652c5c53d3110bd9e0698e868d29d" id=115e9859-dcfb-4477-96d4-7cb95727789e name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 22 00:54:21 old-k8s-version-625837 crio[653]: time="2025-11-22T00:54:21.962044247Z" level=info msg="Error loading conmon cgroup of container 058de021acd50f7260a63c0cbbdce1ea31f652c5c53d3110bd9e0698e868d29d: cgroup deleted" id=115e9859-dcfb-4477-96d4-7cb95727789e name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 22 00:54:21 old-k8s-version-625837 crio[653]: time="2025-11-22T00:54:21.968863895Z" level=info msg="Removed container 058de021acd50f7260a63c0cbbdce1ea31f652c5c53d3110bd9e0698e868d29d: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l5hnj/dashboard-metrics-scraper" id=115e9859-dcfb-4477-96d4-7cb95727789e name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 22 00:54:24 old-k8s-version-625837 crio[653]: time="2025-11-22T00:54:24.526774505Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:54:24 old-k8s-version-625837 crio[653]: time="2025-11-22T00:54:24.533401298Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:54:24 old-k8s-version-625837 crio[653]: time="2025-11-22T00:54:24.533441994Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 22 00:54:24 old-k8s-version-625837 crio[653]: time="2025-11-22T00:54:24.533465821Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:54:24 old-k8s-version-625837 crio[653]: time="2025-11-22T00:54:24.542203018Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:54:24 old-k8s-version-625837 crio[653]: time="2025-11-22T00:54:24.542370446Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 22 00:54:24 old-k8s-version-625837 crio[653]: time="2025-11-22T00:54:24.542451888Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:54:24 old-k8s-version-625837 crio[653]: time="2025-11-22T00:54:24.547856111Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:54:24 old-k8s-version-625837 crio[653]: time="2025-11-22T00:54:24.547887881Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 22 00:54:24 old-k8s-version-625837 crio[653]: time="2025-11-22T00:54:24.547910937Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:54:24 old-k8s-version-625837 crio[653]: time="2025-11-22T00:54:24.551559002Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:54:24 old-k8s-version-625837 crio[653]: time="2025-11-22T00:54:24.551596219Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 22 00:54:24 old-k8s-version-625837 crio[653]: time="2025-11-22T00:54:24.551619538Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:54:24 old-k8s-version-625837 crio[653]: time="2025-11-22T00:54:24.554904192Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:54:24 old-k8s-version-625837 crio[653]: time="2025-11-22T00:54:24.554936511Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	d5df7182fcdae       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           14 seconds ago      Exited              dashboard-metrics-scraper   2                   958c2701ff983       dashboard-metrics-scraper-5f989dc9cf-l5hnj       kubernetes-dashboard
	ba7fcf01e100d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           20 seconds ago      Running             storage-provisioner         2                   c5bc79f65de3c       storage-provisioner                              kube-system
	c68e57a00d3a7       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   29 seconds ago      Running             kubernetes-dashboard        0                   42302c17730cd       kubernetes-dashboard-8694d4445c-kp26b            kubernetes-dashboard
	654cb4e625cf5       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           51 seconds ago      Running             busybox                     1                   e759bcdaf4d5e       busybox                                          default
	862c22e09e90a       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           51 seconds ago      Running             kindnet-cni                 1                   0076beb33e255       kindnet-h6vbs                                    kube-system
	a05ae1ca90b87       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           51 seconds ago      Exited              storage-provisioner         1                   c5bc79f65de3c       storage-provisioner                              kube-system
	5a851503d8f86       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           51 seconds ago      Running             coredns                     1                   9b53dbefdfb34       coredns-5dd5756b68-6m4nr                         kube-system
	b36da003b1235       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           51 seconds ago      Running             kube-proxy                  1                   cf63710883e16       kube-proxy-zdmf6                                 kube-system
	9deafbe8687dc       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           58 seconds ago      Running             kube-scheduler              1                   767cd807d2aff       kube-scheduler-old-k8s-version-625837            kube-system
	a1d1d67ba75cb       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           58 seconds ago      Running             etcd                        1                   c4858b026f61e       etcd-old-k8s-version-625837                      kube-system
	4c192c23a5c2c       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           58 seconds ago      Running             kube-apiserver              1                   618ca9425648e       kube-apiserver-old-k8s-version-625837            kube-system
	fee91ee3c4414       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           58 seconds ago      Running             kube-controller-manager     1                   4a0e390229e94       kube-controller-manager-old-k8s-version-625837   kube-system
	
	
	==> coredns [5a851503d8f86845f2821dfa2135db11f6512a59f26a300ecd84a35339db4496] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:41536 - 56119 "HINFO IN 2345663300153241260.7465411533950033970. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023952332s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-625837
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-625837
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=old-k8s-version-625837
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T00_52_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:52:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-625837
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:54:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:54:13 +0000   Sat, 22 Nov 2025 00:52:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:54:13 +0000   Sat, 22 Nov 2025 00:52:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:54:13 +0000   Sat, 22 Nov 2025 00:52:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 22 Nov 2025 00:54:13 +0000   Sat, 22 Nov 2025 00:53:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-625837
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c92eea4d3c03c0156e01fcf8691e3907
	  System UUID:                dcff3a74-c051-4bbe-bac8-1863a477231a
	  Boot ID:                    72ac7385-472f-47d1-a23e-bd80468d6e09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-5dd5756b68-6m4nr                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     109s
	  kube-system                 etcd-old-k8s-version-625837                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m1s
	  kube-system                 kindnet-h6vbs                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-old-k8s-version-625837             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-controller-manager-old-k8s-version-625837    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-proxy-zdmf6                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-old-k8s-version-625837             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-l5hnj        0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-kp26b             0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 107s                 kube-proxy       
	  Normal  Starting                 51s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m9s (x8 over 2m9s)  kubelet          Node old-k8s-version-625837 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m9s (x8 over 2m9s)  kubelet          Node old-k8s-version-625837 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m9s (x8 over 2m9s)  kubelet          Node old-k8s-version-625837 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     2m1s                 kubelet          Node old-k8s-version-625837 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m1s                 kubelet          Node old-k8s-version-625837 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m1s                 kubelet          Node old-k8s-version-625837 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m1s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           109s                 node-controller  Node old-k8s-version-625837 event: Registered Node old-k8s-version-625837 in Controller
	  Normal  NodeReady                94s                  kubelet          Node old-k8s-version-625837 status is now: NodeReady
	  Normal  Starting                 60s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s (x8 over 60s)    kubelet          Node old-k8s-version-625837 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x8 over 60s)    kubelet          Node old-k8s-version-625837 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x8 over 60s)    kubelet          Node old-k8s-version-625837 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           41s                  node-controller  Node old-k8s-version-625837 event: Registered Node old-k8s-version-625837 in Controller
	
	
	==> dmesg <==
	[Nov22 00:26] overlayfs: idmapped layers are currently not supported
	[Nov22 00:31] overlayfs: idmapped layers are currently not supported
	[ +30.712010] overlayfs: idmapped layers are currently not supported
	[Nov22 00:32] overlayfs: idmapped layers are currently not supported
	[Nov22 00:33] overlayfs: idmapped layers are currently not supported
	[Nov22 00:35] overlayfs: idmapped layers are currently not supported
	[Nov22 00:36] overlayfs: idmapped layers are currently not supported
	[ +18.168104] overlayfs: idmapped layers are currently not supported
	[Nov22 00:37] overlayfs: idmapped layers are currently not supported
	[ +56.322609] overlayfs: idmapped layers are currently not supported
	[Nov22 00:38] overlayfs: idmapped layers are currently not supported
	[Nov22 00:39] overlayfs: idmapped layers are currently not supported
	[ +23.174928] overlayfs: idmapped layers are currently not supported
	[Nov22 00:41] overlayfs: idmapped layers are currently not supported
	[Nov22 00:42] overlayfs: idmapped layers are currently not supported
	[Nov22 00:44] overlayfs: idmapped layers are currently not supported
	[Nov22 00:45] overlayfs: idmapped layers are currently not supported
	[Nov22 00:46] overlayfs: idmapped layers are currently not supported
	[Nov22 00:48] overlayfs: idmapped layers are currently not supported
	[Nov22 00:50] overlayfs: idmapped layers are currently not supported
	[Nov22 00:51] overlayfs: idmapped layers are currently not supported
	[ +11.900293] overlayfs: idmapped layers are currently not supported
	[ +28.922055] overlayfs: idmapped layers are currently not supported
	[Nov22 00:52] overlayfs: idmapped layers are currently not supported
	[Nov22 00:53] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [a1d1d67ba75cb36995d73540a0a298366b4b32ccbfda1a424c21b0b86506d11d] <==
	{"level":"info","ts":"2025-11-22T00:53:37.798615Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-22T00:53:37.798651Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-22T00:53:37.798912Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-11-22T00:53:37.799022Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-11-22T00:53:37.799145Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-22T00:53:37.799451Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-22T00:53:37.838111Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-22T00:53:37.83827Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-22T00:53:37.838281Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-22T00:53:37.838846Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-22T00:53:37.83887Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-22T00:53:38.751928Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-22T00:53:38.751978Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-22T00:53:38.75201Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-22T00:53:38.752023Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-11-22T00:53:38.752032Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-22T00:53:38.752042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-11-22T00:53:38.752061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-22T00:53:38.758215Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-625837 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-22T00:53:38.758265Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-22T00:53:38.761262Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-22T00:53:38.761568Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-22T00:53:38.774094Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-22T00:53:38.774173Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-22T00:53:38.774188Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 00:54:36 up  5:36,  0 user,  load average: 2.39, 3.02, 2.43
	Linux old-k8s-version-625837 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [862c22e09e90a6b8d8c4549584f6e46b25d2206c9d9578169b2675c47e337141] <==
	I1122 00:53:44.331392       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1122 00:53:44.331638       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1122 00:53:44.331827       1 main.go:148] setting mtu 1500 for CNI 
	I1122 00:53:44.331850       1 main.go:178] kindnetd IP family: "ipv4"
	I1122 00:53:44.331878       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-22T00:53:44Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1122 00:53:44.525582       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1122 00:53:44.525674       1 controller.go:381] "Waiting for informer caches to sync"
	I1122 00:53:44.525774       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1122 00:53:44.525934       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1122 00:54:14.526429       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1122 00:54:14.526429       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1122 00:54:14.526541       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1122 00:54:14.527781       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1122 00:54:16.026659       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1122 00:54:16.026687       1 metrics.go:72] Registering metrics
	I1122 00:54:16.026754       1 controller.go:711] "Syncing nftables rules"
	I1122 00:54:24.526451       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:54:24.526528       1 main.go:301] handling current node
	I1122 00:54:34.530732       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:54:34.530774       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4c192c23a5c2cb8d4827103c705875b67426f13e7541c3c230c0bacb6b6f0ca9] <==
	I1122 00:53:43.015516       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1122 00:53:43.068230       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1122 00:53:43.074098       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1122 00:53:43.074199       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1122 00:53:43.074335       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1122 00:53:43.080556       1 shared_informer.go:318] Caches are synced for configmaps
	I1122 00:53:43.080659       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1122 00:53:43.081329       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1122 00:53:43.081353       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1122 00:53:43.083344       1 aggregator.go:166] initial CRD sync complete...
	I1122 00:53:43.083436       1 autoregister_controller.go:141] Starting autoregister controller
	I1122 00:53:43.083465       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1122 00:53:43.083604       1 cache.go:39] Caches are synced for autoregister controller
	E1122 00:53:43.129714       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1122 00:53:43.685908       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1122 00:53:44.880932       1 controller.go:624] quota admission added evaluator for: namespaces
	I1122 00:53:44.922456       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1122 00:53:44.954300       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1122 00:53:44.965680       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1122 00:53:44.974500       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1122 00:53:45.046745       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.27.187"}
	I1122 00:53:45.071912       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.1.58"}
	I1122 00:53:56.251708       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1122 00:53:56.453951       1 controller.go:624] quota admission added evaluator for: endpoints
	I1122 00:53:56.504809       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [fee91ee3c441411dbcd777ca8d5095cfd10e8a67a33ad4caf348ae63ec865a72] <==
	I1122 00:53:56.140486       1 shared_informer.go:318] Caches are synced for resource quota
	I1122 00:53:56.144539       1 shared_informer.go:318] Caches are synced for deployment
	I1122 00:53:56.512438       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1122 00:53:56.520254       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1122 00:53:56.553614       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-l5hnj"
	I1122 00:53:56.553654       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-kp26b"
	I1122 00:53:56.565637       1 shared_informer.go:318] Caches are synced for garbage collector
	I1122 00:53:56.571727       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="51.340559ms"
	I1122 00:53:56.572649       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="61.208185ms"
	I1122 00:53:56.595118       1 shared_informer.go:318] Caches are synced for garbage collector
	I1122 00:53:56.595208       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1122 00:53:56.600814       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="27.717588ms"
	I1122 00:53:56.601024       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="47.728µs"
	I1122 00:53:56.603169       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="29.777171ms"
	I1122 00:53:56.603340       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="74.475µs"
	I1122 00:53:56.620368       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="75.739µs"
	I1122 00:54:01.910272       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="100.321µs"
	I1122 00:54:02.920463       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="54.915µs"
	I1122 00:54:03.922594       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="74.459µs"
	I1122 00:54:06.939553       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="14.237408ms"
	I1122 00:54:06.939728       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="53.036µs"
	I1122 00:54:19.606180       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="17.185253ms"
	I1122 00:54:19.606418       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="119.537µs"
	I1122 00:54:21.966807       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="83.198µs"
	I1122 00:54:26.881897       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="92.485µs"
	
	
	==> kube-proxy [b36da003b1235361d4b8a4e7e49cab04a763af242e9b15f7fb361a03edb9e4c8] <==
	I1122 00:53:44.457752       1 server_others.go:69] "Using iptables proxy"
	I1122 00:53:44.479853       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1122 00:53:44.499092       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1122 00:53:44.504001       1 server_others.go:152] "Using iptables Proxier"
	I1122 00:53:44.504041       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1122 00:53:44.504050       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1122 00:53:44.504080       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1122 00:53:44.504294       1 server.go:846] "Version info" version="v1.28.0"
	I1122 00:53:44.504522       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:53:44.505252       1 config.go:188] "Starting service config controller"
	I1122 00:53:44.505276       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1122 00:53:44.505292       1 config.go:97] "Starting endpoint slice config controller"
	I1122 00:53:44.505296       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1122 00:53:44.505880       1 config.go:315] "Starting node config controller"
	I1122 00:53:44.505889       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1122 00:53:44.606104       1 shared_informer.go:318] Caches are synced for node config
	I1122 00:53:44.606142       1 shared_informer.go:318] Caches are synced for service config
	I1122 00:53:44.606168       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [9deafbe8687dcd224ab5e480cfefa5cc596bb04d62aab6f5da3083aca07488e8] <==
	I1122 00:53:40.682527       1 serving.go:348] Generated self-signed cert in-memory
	W1122 00:53:42.847335       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1122 00:53:42.847435       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1122 00:53:42.847468       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1122 00:53:42.847498       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1122 00:53:43.019354       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1122 00:53:43.019454       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:53:43.026053       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1122 00:53:43.026281       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:53:43.026341       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1122 00:53:43.026382       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1122 00:53:43.127223       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 22 00:53:56 old-k8s-version-625837 kubelet[785]: I1122 00:53:56.646150     785 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/beaa8339-684c-498f-81c1-beb37e1977c4-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-l5hnj\" (UID: \"beaa8339-684c-498f-81c1-beb37e1977c4\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l5hnj"
	Nov 22 00:53:56 old-k8s-version-625837 kubelet[785]: I1122 00:53:56.646177     785 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn4xk\" (UniqueName: \"kubernetes.io/projected/8bf88fab-10f4-4b9e-9866-f2cc0cade558-kube-api-access-fn4xk\") pod \"kubernetes-dashboard-8694d4445c-kp26b\" (UID: \"8bf88fab-10f4-4b9e-9866-f2cc0cade558\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-kp26b"
	Nov 22 00:53:56 old-k8s-version-625837 kubelet[785]: I1122 00:53:56.646206     785 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw22f\" (UniqueName: \"kubernetes.io/projected/beaa8339-684c-498f-81c1-beb37e1977c4-kube-api-access-mw22f\") pod \"dashboard-metrics-scraper-5f989dc9cf-l5hnj\" (UID: \"beaa8339-684c-498f-81c1-beb37e1977c4\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l5hnj"
	Nov 22 00:53:56 old-k8s-version-625837 kubelet[785]: W1122 00:53:56.912102     785 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/c1b8e95ff95e3c8d00f04e99efeff1b9aae77957e1730bef176997569aa985cb/crio-958c2701ff983f0ad7298f1e136c428203606ac2a8085f04ca54797af8f4a1b8 WatchSource:0}: Error finding container 958c2701ff983f0ad7298f1e136c428203606ac2a8085f04ca54797af8f4a1b8: Status 404 returned error can't find the container with id 958c2701ff983f0ad7298f1e136c428203606ac2a8085f04ca54797af8f4a1b8
	Nov 22 00:53:56 old-k8s-version-625837 kubelet[785]: W1122 00:53:56.929243     785 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/c1b8e95ff95e3c8d00f04e99efeff1b9aae77957e1730bef176997569aa985cb/crio-42302c17730cd39cb619ef1485897b51ad645c8f1b41f2fbea23d194a073196a WatchSource:0}: Error finding container 42302c17730cd39cb619ef1485897b51ad645c8f1b41f2fbea23d194a073196a: Status 404 returned error can't find the container with id 42302c17730cd39cb619ef1485897b51ad645c8f1b41f2fbea23d194a073196a
	Nov 22 00:54:01 old-k8s-version-625837 kubelet[785]: I1122 00:54:01.893753     785 scope.go:117] "RemoveContainer" containerID="ff320e48a3a516fef9253d0cdbb9e9db7010c4d7b9c8cffa9a738b9d7614378e"
	Nov 22 00:54:02 old-k8s-version-625837 kubelet[785]: I1122 00:54:02.900327     785 scope.go:117] "RemoveContainer" containerID="ff320e48a3a516fef9253d0cdbb9e9db7010c4d7b9c8cffa9a738b9d7614378e"
	Nov 22 00:54:02 old-k8s-version-625837 kubelet[785]: I1122 00:54:02.900630     785 scope.go:117] "RemoveContainer" containerID="058de021acd50f7260a63c0cbbdce1ea31f652c5c53d3110bd9e0698e868d29d"
	Nov 22 00:54:02 old-k8s-version-625837 kubelet[785]: E1122 00:54:02.900927     785 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-l5hnj_kubernetes-dashboard(beaa8339-684c-498f-81c1-beb37e1977c4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l5hnj" podUID="beaa8339-684c-498f-81c1-beb37e1977c4"
	Nov 22 00:54:03 old-k8s-version-625837 kubelet[785]: I1122 00:54:03.904527     785 scope.go:117] "RemoveContainer" containerID="058de021acd50f7260a63c0cbbdce1ea31f652c5c53d3110bd9e0698e868d29d"
	Nov 22 00:54:03 old-k8s-version-625837 kubelet[785]: E1122 00:54:03.904805     785 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-l5hnj_kubernetes-dashboard(beaa8339-684c-498f-81c1-beb37e1977c4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l5hnj" podUID="beaa8339-684c-498f-81c1-beb37e1977c4"
	Nov 22 00:54:06 old-k8s-version-625837 kubelet[785]: I1122 00:54:06.865494     785 scope.go:117] "RemoveContainer" containerID="058de021acd50f7260a63c0cbbdce1ea31f652c5c53d3110bd9e0698e868d29d"
	Nov 22 00:54:06 old-k8s-version-625837 kubelet[785]: E1122 00:54:06.865830     785 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-l5hnj_kubernetes-dashboard(beaa8339-684c-498f-81c1-beb37e1977c4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l5hnj" podUID="beaa8339-684c-498f-81c1-beb37e1977c4"
	Nov 22 00:54:14 old-k8s-version-625837 kubelet[785]: I1122 00:54:14.929059     785 scope.go:117] "RemoveContainer" containerID="a05ae1ca90b871431d6a63387000b3a0fc2d30bdc217ce9cd70319e940e72234"
	Nov 22 00:54:14 old-k8s-version-625837 kubelet[785]: I1122 00:54:14.963945     785 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-kp26b" podStartSLOduration=9.908889409 podCreationTimestamp="2025-11-22 00:53:56 +0000 UTC" firstStartedPulling="2025-11-22 00:53:56.933440852 +0000 UTC m=+20.508574084" lastFinishedPulling="2025-11-22 00:54:05.988426258 +0000 UTC m=+29.563559482" observedRunningTime="2025-11-22 00:54:06.928332717 +0000 UTC m=+30.503465949" watchObservedRunningTime="2025-11-22 00:54:14.963874807 +0000 UTC m=+38.539008031"
	Nov 22 00:54:21 old-k8s-version-625837 kubelet[785]: I1122 00:54:21.627276     785 scope.go:117] "RemoveContainer" containerID="058de021acd50f7260a63c0cbbdce1ea31f652c5c53d3110bd9e0698e868d29d"
	Nov 22 00:54:21 old-k8s-version-625837 kubelet[785]: I1122 00:54:21.946747     785 scope.go:117] "RemoveContainer" containerID="058de021acd50f7260a63c0cbbdce1ea31f652c5c53d3110bd9e0698e868d29d"
	Nov 22 00:54:21 old-k8s-version-625837 kubelet[785]: I1122 00:54:21.947024     785 scope.go:117] "RemoveContainer" containerID="d5df7182fcdae9d1252372617e3f730dce9069a240030f68c42a96f8a784beb0"
	Nov 22 00:54:21 old-k8s-version-625837 kubelet[785]: E1122 00:54:21.947350     785 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-l5hnj_kubernetes-dashboard(beaa8339-684c-498f-81c1-beb37e1977c4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l5hnj" podUID="beaa8339-684c-498f-81c1-beb37e1977c4"
	Nov 22 00:54:26 old-k8s-version-625837 kubelet[785]: I1122 00:54:26.865826     785 scope.go:117] "RemoveContainer" containerID="d5df7182fcdae9d1252372617e3f730dce9069a240030f68c42a96f8a784beb0"
	Nov 22 00:54:26 old-k8s-version-625837 kubelet[785]: E1122 00:54:26.866133     785 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-l5hnj_kubernetes-dashboard(beaa8339-684c-498f-81c1-beb37e1977c4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l5hnj" podUID="beaa8339-684c-498f-81c1-beb37e1977c4"
	Nov 22 00:54:33 old-k8s-version-625837 kubelet[785]: I1122 00:54:33.417176     785 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 22 00:54:33 old-k8s-version-625837 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 22 00:54:33 old-k8s-version-625837 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 22 00:54:33 old-k8s-version-625837 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [c68e57a00d3a76b7ae45ba0f8dd0b5bd690d691009ae2395dd8a7a8d4b3955db] <==
	2025/11/22 00:54:06 Using namespace: kubernetes-dashboard
	2025/11/22 00:54:06 Using in-cluster config to connect to apiserver
	2025/11/22 00:54:06 Using secret token for csrf signing
	2025/11/22 00:54:06 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/22 00:54:06 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/22 00:54:06 Successful initial request to the apiserver, version: v1.28.0
	2025/11/22 00:54:06 Generating JWE encryption key
	2025/11/22 00:54:06 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/22 00:54:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/22 00:54:06 Initializing JWE encryption key from synchronized object
	2025/11/22 00:54:06 Creating in-cluster Sidecar client
	2025/11/22 00:54:06 Serving insecurely on HTTP port: 9090
	2025/11/22 00:54:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/22 00:54:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/22 00:54:06 Starting overwatch
	
	
	==> storage-provisioner [a05ae1ca90b871431d6a63387000b3a0fc2d30bdc217ce9cd70319e940e72234] <==
	I1122 00:53:44.303878       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1122 00:54:14.309990       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [ba7fcf01e100d9075f965f94ff668899f01eaaea6cf6c057437439123135dbae] <==
	I1122 00:54:14.981328       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1122 00:54:14.994094       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1122 00:54:14.994150       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1122 00:54:32.403212       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1122 00:54:32.403660       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"829114ab-116e-46a2-b9b8-eeaca50c29a6", APIVersion:"v1", ResourceVersion:"669", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-625837_8a72e68a-4046-4038-aa83-3de9b61a4c43 became leader
	I1122 00:54:32.403779       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-625837_8a72e68a-4046-4038-aa83-3de9b61a4c43!
	I1122 00:54:32.504660       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-625837_8a72e68a-4046-4038-aa83-3de9b61a4c43!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-625837 -n old-k8s-version-625837
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-625837 -n old-k8s-version-625837: exit status 2 (502.875866ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-625837 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
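Note: the "==> Audit <==" table further down in these logs records the exact command that produced this Pause failure. Re-running it by hand with verbose logging, followed by the same status probe the harness uses, is a quick way to reproduce the problem outside the test framework. A minimal sketch, assuming the old-k8s-version-625837 profile still exists and the repo-local binary is invoked the same way it is elsewhere in this report:

	out/minikube-linux-arm64 pause -p old-k8s-version-625837 --alsologtostderr -v=1
	out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-625837 -n old-k8s-version-625837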
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-625837
helpers_test.go:243: (dbg) docker inspect old-k8s-version-625837:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c1b8e95ff95e3c8d00f04e99efeff1b9aae77957e1730bef176997569aa985cb",
	        "Created": "2025-11-22T00:52:11.631298738Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 696484,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-22T00:53:29.990117235Z",
	            "FinishedAt": "2025-11-22T00:53:29.137902629Z"
	        },
	        "Image": "sha256:97061622515a85470dad13cb046ad5fc5021fcd2b05aa921a620abb52951cd5d",
	        "ResolvConfPath": "/var/lib/docker/containers/c1b8e95ff95e3c8d00f04e99efeff1b9aae77957e1730bef176997569aa985cb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c1b8e95ff95e3c8d00f04e99efeff1b9aae77957e1730bef176997569aa985cb/hostname",
	        "HostsPath": "/var/lib/docker/containers/c1b8e95ff95e3c8d00f04e99efeff1b9aae77957e1730bef176997569aa985cb/hosts",
	        "LogPath": "/var/lib/docker/containers/c1b8e95ff95e3c8d00f04e99efeff1b9aae77957e1730bef176997569aa985cb/c1b8e95ff95e3c8d00f04e99efeff1b9aae77957e1730bef176997569aa985cb-json.log",
	        "Name": "/old-k8s-version-625837",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-625837:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-625837",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c1b8e95ff95e3c8d00f04e99efeff1b9aae77957e1730bef176997569aa985cb",
	                "LowerDir": "/var/lib/docker/overlay2/d5742c1c24516207890a7b6e13b848d2f42ff607041b29dcbd0346c7d43d472d-init/diff:/var/lib/docker/overlay2/7e8788c6de692bc1c3758a2bb2c4b8da0fbba26855f855c0f3b655bfbdd92f8e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d5742c1c24516207890a7b6e13b848d2f42ff607041b29dcbd0346c7d43d472d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d5742c1c24516207890a7b6e13b848d2f42ff607041b29dcbd0346c7d43d472d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d5742c1c24516207890a7b6e13b848d2f42ff607041b29dcbd0346c7d43d472d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-625837",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-625837/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-625837",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-625837",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-625837",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6f56587011a9bd4086d696d587f02af720d3b10a2deffb9d9ed4024f5dbca0be",
	            "SandboxKey": "/var/run/docker/netns/6f56587011a9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33775"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33776"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33779"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33777"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33778"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-625837": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:d6:5e:39:84:2f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ee4ddaee680d222041a033cf4edb5764a7a32b1715bb1145e84ad0704600fbeb",
	                    "EndpointID": "edf096993a8a45f8965e7cfd56a32f4513cecf21afe5933b0c0b49bc9d1f45f9",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-625837",
	                        "c1b8e95ff95e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
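Only a few fields in the inspect dump above matter for this Pause post-mortem: State.Status and State.Paused (the kic container is running and not paused at the Docker level), the published host ports, and the profile network's IPAddress. A short sketch for pulling out just the state block rather than scanning the full JSON, assuming jq is installed on the host:

	docker inspect old-k8s-version-625837 | jq '.[0].State | {Status, Paused, Pid, StartedAt, FinishedAt}'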
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-625837 -n old-k8s-version-625837
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-625837 -n old-k8s-version-625837: exit status 2 (451.647606ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-625837 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-625837 logs -n 25: (1.332901872s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-163229 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-163229             │ jenkins │ v1.37.0 │ 22 Nov 25 00:50 UTC │                     │
	│ ssh     │ -p cilium-163229 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-163229             │ jenkins │ v1.37.0 │ 22 Nov 25 00:50 UTC │                     │
	│ ssh     │ -p cilium-163229 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-163229             │ jenkins │ v1.37.0 │ 22 Nov 25 00:50 UTC │                     │
	│ ssh     │ -p cilium-163229 sudo containerd config dump                                                                                                                                                                                                  │ cilium-163229             │ jenkins │ v1.37.0 │ 22 Nov 25 00:50 UTC │                     │
	│ ssh     │ -p cilium-163229 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-163229             │ jenkins │ v1.37.0 │ 22 Nov 25 00:50 UTC │                     │
	│ ssh     │ -p cilium-163229 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-163229             │ jenkins │ v1.37.0 │ 22 Nov 25 00:50 UTC │                     │
	│ ssh     │ -p cilium-163229 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-163229             │ jenkins │ v1.37.0 │ 22 Nov 25 00:50 UTC │                     │
	│ ssh     │ -p cilium-163229 sudo crio config                                                                                                                                                                                                             │ cilium-163229             │ jenkins │ v1.37.0 │ 22 Nov 25 00:50 UTC │                     │
	│ delete  │ -p cilium-163229                                                                                                                                                                                                                              │ cilium-163229             │ jenkins │ v1.37.0 │ 22 Nov 25 00:50 UTC │ 22 Nov 25 00:50 UTC │
	│ start   │ -p force-systemd-env-634519 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-634519  │ jenkins │ v1.37.0 │ 22 Nov 25 00:50 UTC │ 22 Nov 25 00:51 UTC │
	│ delete  │ -p kubernetes-upgrade-134864                                                                                                                                                                                                                  │ kubernetes-upgrade-134864 │ jenkins │ v1.37.0 │ 22 Nov 25 00:50 UTC │ 22 Nov 25 00:51 UTC │
	│ start   │ -p cert-expiration-621390 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-621390    │ jenkins │ v1.37.0 │ 22 Nov 25 00:51 UTC │ 22 Nov 25 00:51 UTC │
	│ delete  │ -p force-systemd-env-634519                                                                                                                                                                                                                   │ force-systemd-env-634519  │ jenkins │ v1.37.0 │ 22 Nov 25 00:51 UTC │ 22 Nov 25 00:51 UTC │
	│ start   │ -p cert-options-002126 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-002126       │ jenkins │ v1.37.0 │ 22 Nov 25 00:51 UTC │ 22 Nov 25 00:52 UTC │
	│ ssh     │ cert-options-002126 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-002126       │ jenkins │ v1.37.0 │ 22 Nov 25 00:52 UTC │ 22 Nov 25 00:52 UTC │
	│ ssh     │ -p cert-options-002126 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-002126       │ jenkins │ v1.37.0 │ 22 Nov 25 00:52 UTC │ 22 Nov 25 00:52 UTC │
	│ delete  │ -p cert-options-002126                                                                                                                                                                                                                        │ cert-options-002126       │ jenkins │ v1.37.0 │ 22 Nov 25 00:52 UTC │ 22 Nov 25 00:52 UTC │
	│ start   │ -p old-k8s-version-625837 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-625837    │ jenkins │ v1.37.0 │ 22 Nov 25 00:52 UTC │ 22 Nov 25 00:53 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-625837 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-625837    │ jenkins │ v1.37.0 │ 22 Nov 25 00:53 UTC │                     │
	│ stop    │ -p old-k8s-version-625837 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-625837    │ jenkins │ v1.37.0 │ 22 Nov 25 00:53 UTC │ 22 Nov 25 00:53 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-625837 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-625837    │ jenkins │ v1.37.0 │ 22 Nov 25 00:53 UTC │ 22 Nov 25 00:53 UTC │
	│ start   │ -p old-k8s-version-625837 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-625837    │ jenkins │ v1.37.0 │ 22 Nov 25 00:53 UTC │ 22 Nov 25 00:54 UTC │
	│ image   │ old-k8s-version-625837 image list --format=json                                                                                                                                                                                               │ old-k8s-version-625837    │ jenkins │ v1.37.0 │ 22 Nov 25 00:54 UTC │ 22 Nov 25 00:54 UTC │
	│ pause   │ -p old-k8s-version-625837 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-625837    │ jenkins │ v1.37.0 │ 22 Nov 25 00:54 UTC │                     │
	│ start   │ -p cert-expiration-621390 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-621390    │ jenkins │ v1.37.0 │ 22 Nov 25 00:54 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 00:54:36
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1122 00:54:36.006516  699039 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:54:36.006651  699039 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:54:36.006655  699039 out.go:374] Setting ErrFile to fd 2...
	I1122 00:54:36.006659  699039 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:54:36.006932  699039 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1122 00:54:36.007557  699039 out.go:368] Setting JSON to false
	I1122 00:54:36.008735  699039 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":20192,"bootTime":1763752684,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1122 00:54:36.008825  699039 start.go:143] virtualization:  
	I1122 00:54:36.014995  699039 out.go:179] * [cert-expiration-621390] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1122 00:54:36.019160  699039 notify.go:221] Checking for updates...
	I1122 00:54:36.022224  699039 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:54:36.025458  699039 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:54:36.028474  699039 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1122 00:54:36.031417  699039 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-513600/.minikube
	I1122 00:54:36.034577  699039 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1122 00:54:36.037606  699039 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:54:36.041028  699039 config.go:182] Loaded profile config "cert-expiration-621390": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:54:36.041574  699039 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:54:36.088820  699039 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1122 00:54:36.088929  699039 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:54:36.182340  699039 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-22 00:54:36.171098644 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:54:36.182431  699039 docker.go:319] overlay module found
	I1122 00:54:36.185417  699039 out.go:179] * Using the docker driver based on existing profile
	I1122 00:54:36.188358  699039 start.go:309] selected driver: docker
	I1122 00:54:36.188367  699039 start.go:930] validating driver "docker" against &{Name:cert-expiration-621390 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-621390 Namespace:default APIServerHAVIP: A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:54:36.188456  699039 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:54:36.189175  699039 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:54:36.285510  699039 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-22 00:54:36.274025213 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:54:36.285904  699039 cni.go:84] Creating CNI manager for ""
	I1122 00:54:36.285963  699039 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:54:36.286010  699039 start.go:353] cluster config:
	{Name:cert-expiration-621390 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-621390 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loca
l ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1122 00:54:36.289371  699039 out.go:179] * Starting "cert-expiration-621390" primary control-plane node in "cert-expiration-621390" cluster
	I1122 00:54:36.292121  699039 cache.go:134] Beginning downloading kic base image for docker with crio
	I1122 00:54:36.294940  699039 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:54:36.297975  699039 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:54:36.298024  699039 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1122 00:54:36.298032  699039 cache.go:65] Caching tarball of preloaded images
	I1122 00:54:36.298047  699039 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:54:36.298113  699039 preload.go:238] Found /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1122 00:54:36.298121  699039 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1122 00:54:36.298223  699039 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/cert-expiration-621390/config.json ...
	I1122 00:54:36.319919  699039 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:54:36.319929  699039 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:54:36.319940  699039 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:54:36.319961  699039 start.go:360] acquireMachinesLock for cert-expiration-621390: {Name:mkf4bacb3899a44914a061c8cdff29066e3341e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:54:36.320009  699039 start.go:364] duration metric: took 32.68µs to acquireMachinesLock for "cert-expiration-621390"
	I1122 00:54:36.320026  699039 start.go:96] Skipping create...Using existing machine configuration
	I1122 00:54:36.320030  699039 fix.go:54] fixHost starting: 
	I1122 00:54:36.320281  699039 cli_runner.go:164] Run: docker container inspect cert-expiration-621390 --format={{.State.Status}}
	I1122 00:54:36.347206  699039 fix.go:112] recreateIfNeeded on cert-expiration-621390: state=Running err=<nil>
	W1122 00:54:36.347232  699039 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> CRI-O <==
	Nov 22 00:54:21 old-k8s-version-625837 crio[653]: time="2025-11-22T00:54:21.630800861Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:54:21 old-k8s-version-625837 crio[653]: time="2025-11-22T00:54:21.64028348Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:54:21 old-k8s-version-625837 crio[653]: time="2025-11-22T00:54:21.640932831Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:54:21 old-k8s-version-625837 crio[653]: time="2025-11-22T00:54:21.657621032Z" level=info msg="Created container d5df7182fcdae9d1252372617e3f730dce9069a240030f68c42a96f8a784beb0: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l5hnj/dashboard-metrics-scraper" id=f92a0282-e6ca-43d9-a661-2d2ed176a2c9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:54:21 old-k8s-version-625837 crio[653]: time="2025-11-22T00:54:21.662320406Z" level=info msg="Starting container: d5df7182fcdae9d1252372617e3f730dce9069a240030f68c42a96f8a784beb0" id=8fcc2171-fa09-4bb6-bd79-b0d028db0270 name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:54:21 old-k8s-version-625837 crio[653]: time="2025-11-22T00:54:21.664605631Z" level=info msg="Started container" PID=1666 containerID=d5df7182fcdae9d1252372617e3f730dce9069a240030f68c42a96f8a784beb0 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l5hnj/dashboard-metrics-scraper id=8fcc2171-fa09-4bb6-bd79-b0d028db0270 name=/runtime.v1.RuntimeService/StartContainer sandboxID=958c2701ff983f0ad7298f1e136c428203606ac2a8085f04ca54797af8f4a1b8
	Nov 22 00:54:21 old-k8s-version-625837 conmon[1664]: conmon d5df7182fcdae9d12523 <ninfo>: container 1666 exited with status 1
	Nov 22 00:54:21 old-k8s-version-625837 crio[653]: time="2025-11-22T00:54:21.950919043Z" level=info msg="Removing container: 058de021acd50f7260a63c0cbbdce1ea31f652c5c53d3110bd9e0698e868d29d" id=115e9859-dcfb-4477-96d4-7cb95727789e name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 22 00:54:21 old-k8s-version-625837 crio[653]: time="2025-11-22T00:54:21.962044247Z" level=info msg="Error loading conmon cgroup of container 058de021acd50f7260a63c0cbbdce1ea31f652c5c53d3110bd9e0698e868d29d: cgroup deleted" id=115e9859-dcfb-4477-96d4-7cb95727789e name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 22 00:54:21 old-k8s-version-625837 crio[653]: time="2025-11-22T00:54:21.968863895Z" level=info msg="Removed container 058de021acd50f7260a63c0cbbdce1ea31f652c5c53d3110bd9e0698e868d29d: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l5hnj/dashboard-metrics-scraper" id=115e9859-dcfb-4477-96d4-7cb95727789e name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 22 00:54:24 old-k8s-version-625837 crio[653]: time="2025-11-22T00:54:24.526774505Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:54:24 old-k8s-version-625837 crio[653]: time="2025-11-22T00:54:24.533401298Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:54:24 old-k8s-version-625837 crio[653]: time="2025-11-22T00:54:24.533441994Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 22 00:54:24 old-k8s-version-625837 crio[653]: time="2025-11-22T00:54:24.533465821Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:54:24 old-k8s-version-625837 crio[653]: time="2025-11-22T00:54:24.542203018Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:54:24 old-k8s-version-625837 crio[653]: time="2025-11-22T00:54:24.542370446Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 22 00:54:24 old-k8s-version-625837 crio[653]: time="2025-11-22T00:54:24.542451888Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:54:24 old-k8s-version-625837 crio[653]: time="2025-11-22T00:54:24.547856111Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:54:24 old-k8s-version-625837 crio[653]: time="2025-11-22T00:54:24.547887881Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 22 00:54:24 old-k8s-version-625837 crio[653]: time="2025-11-22T00:54:24.547910937Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:54:24 old-k8s-version-625837 crio[653]: time="2025-11-22T00:54:24.551559002Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:54:24 old-k8s-version-625837 crio[653]: time="2025-11-22T00:54:24.551596219Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 22 00:54:24 old-k8s-version-625837 crio[653]: time="2025-11-22T00:54:24.551619538Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:54:24 old-k8s-version-625837 crio[653]: time="2025-11-22T00:54:24.554904192Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:54:24 old-k8s-version-625837 crio[653]: time="2025-11-22T00:54:24.554936511Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	d5df7182fcdae       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           16 seconds ago       Exited              dashboard-metrics-scraper   2                   958c2701ff983       dashboard-metrics-scraper-5f989dc9cf-l5hnj       kubernetes-dashboard
	ba7fcf01e100d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           23 seconds ago       Running             storage-provisioner         2                   c5bc79f65de3c       storage-provisioner                              kube-system
	c68e57a00d3a7       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   32 seconds ago       Running             kubernetes-dashboard        0                   42302c17730cd       kubernetes-dashboard-8694d4445c-kp26b            kubernetes-dashboard
	654cb4e625cf5       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           54 seconds ago       Running             busybox                     1                   e759bcdaf4d5e       busybox                                          default
	862c22e09e90a       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           54 seconds ago       Running             kindnet-cni                 1                   0076beb33e255       kindnet-h6vbs                                    kube-system
	a05ae1ca90b87       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           54 seconds ago       Exited              storage-provisioner         1                   c5bc79f65de3c       storage-provisioner                              kube-system
	5a851503d8f86       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           54 seconds ago       Running             coredns                     1                   9b53dbefdfb34       coredns-5dd5756b68-6m4nr                         kube-system
	b36da003b1235       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           54 seconds ago       Running             kube-proxy                  1                   cf63710883e16       kube-proxy-zdmf6                                 kube-system
	9deafbe8687dc       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   767cd807d2aff       kube-scheduler-old-k8s-version-625837            kube-system
	a1d1d67ba75cb       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   c4858b026f61e       etcd-old-k8s-version-625837                      kube-system
	4c192c23a5c2c       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   618ca9425648e       kube-apiserver-old-k8s-version-625837            kube-system
	fee91ee3c4414       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   4a0e390229e94       kube-controller-manager-old-k8s-version-625837   kube-system
	
	
	==> coredns [5a851503d8f86845f2821dfa2135db11f6512a59f26a300ecd84a35339db4496] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:41536 - 56119 "HINFO IN 2345663300153241260.7465411533950033970. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023952332s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-625837
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-625837
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=old-k8s-version-625837
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T00_52_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:52:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-625837
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:54:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:54:13 +0000   Sat, 22 Nov 2025 00:52:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:54:13 +0000   Sat, 22 Nov 2025 00:52:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:54:13 +0000   Sat, 22 Nov 2025 00:52:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 22 Nov 2025 00:54:13 +0000   Sat, 22 Nov 2025 00:53:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-625837
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c92eea4d3c03c0156e01fcf8691e3907
	  System UUID:                dcff3a74-c051-4bbe-bac8-1863a477231a
	  Boot ID:                    72ac7385-472f-47d1-a23e-bd80468d6e09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-5dd5756b68-6m4nr                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     111s
	  kube-system                 etcd-old-k8s-version-625837                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m3s
	  kube-system                 kindnet-h6vbs                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-old-k8s-version-625837             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-controller-manager-old-k8s-version-625837    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-proxy-zdmf6                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-old-k8s-version-625837             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-l5hnj        0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-kp26b             0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 109s                   kube-proxy       
	  Normal  Starting                 54s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  2m11s (x8 over 2m11s)  kubelet          Node old-k8s-version-625837 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m11s (x8 over 2m11s)  kubelet          Node old-k8s-version-625837 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m11s (x8 over 2m11s)  kubelet          Node old-k8s-version-625837 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     2m3s                   kubelet          Node old-k8s-version-625837 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m3s                   kubelet          Node old-k8s-version-625837 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m3s                   kubelet          Node old-k8s-version-625837 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m3s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           111s                   node-controller  Node old-k8s-version-625837 event: Registered Node old-k8s-version-625837 in Controller
	  Normal  NodeReady                96s                    kubelet          Node old-k8s-version-625837 status is now: NodeReady
	  Normal  Starting                 62s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  62s (x8 over 62s)      kubelet          Node old-k8s-version-625837 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s (x8 over 62s)      kubelet          Node old-k8s-version-625837 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x8 over 62s)      kubelet          Node old-k8s-version-625837 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           43s                    node-controller  Node old-k8s-version-625837 event: Registered Node old-k8s-version-625837 in Controller
	
	
	==> dmesg <==
	[Nov22 00:26] overlayfs: idmapped layers are currently not supported
	[Nov22 00:31] overlayfs: idmapped layers are currently not supported
	[ +30.712010] overlayfs: idmapped layers are currently not supported
	[Nov22 00:32] overlayfs: idmapped layers are currently not supported
	[Nov22 00:33] overlayfs: idmapped layers are currently not supported
	[Nov22 00:35] overlayfs: idmapped layers are currently not supported
	[Nov22 00:36] overlayfs: idmapped layers are currently not supported
	[ +18.168104] overlayfs: idmapped layers are currently not supported
	[Nov22 00:37] overlayfs: idmapped layers are currently not supported
	[ +56.322609] overlayfs: idmapped layers are currently not supported
	[Nov22 00:38] overlayfs: idmapped layers are currently not supported
	[Nov22 00:39] overlayfs: idmapped layers are currently not supported
	[ +23.174928] overlayfs: idmapped layers are currently not supported
	[Nov22 00:41] overlayfs: idmapped layers are currently not supported
	[Nov22 00:42] overlayfs: idmapped layers are currently not supported
	[Nov22 00:44] overlayfs: idmapped layers are currently not supported
	[Nov22 00:45] overlayfs: idmapped layers are currently not supported
	[Nov22 00:46] overlayfs: idmapped layers are currently not supported
	[Nov22 00:48] overlayfs: idmapped layers are currently not supported
	[Nov22 00:50] overlayfs: idmapped layers are currently not supported
	[Nov22 00:51] overlayfs: idmapped layers are currently not supported
	[ +11.900293] overlayfs: idmapped layers are currently not supported
	[ +28.922055] overlayfs: idmapped layers are currently not supported
	[Nov22 00:52] overlayfs: idmapped layers are currently not supported
	[Nov22 00:53] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [a1d1d67ba75cb36995d73540a0a298366b4b32ccbfda1a424c21b0b86506d11d] <==
	{"level":"info","ts":"2025-11-22T00:53:37.798615Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-22T00:53:37.798651Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-22T00:53:37.798912Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-11-22T00:53:37.799022Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-11-22T00:53:37.799145Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-22T00:53:37.799451Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-22T00:53:37.838111Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-22T00:53:37.83827Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-22T00:53:37.838281Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-22T00:53:37.838846Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-22T00:53:37.83887Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-22T00:53:38.751928Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-22T00:53:38.751978Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-22T00:53:38.75201Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-22T00:53:38.752023Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-11-22T00:53:38.752032Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-22T00:53:38.752042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-11-22T00:53:38.752061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-22T00:53:38.758215Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-625837 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-22T00:53:38.758265Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-22T00:53:38.761262Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-22T00:53:38.761568Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-22T00:53:38.774094Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-22T00:53:38.774173Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-22T00:53:38.774188Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 00:54:38 up  5:36,  0 user,  load average: 2.39, 3.02, 2.43
	Linux old-k8s-version-625837 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [862c22e09e90a6b8d8c4549584f6e46b25d2206c9d9578169b2675c47e337141] <==
	I1122 00:53:44.331392       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1122 00:53:44.331638       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1122 00:53:44.331827       1 main.go:148] setting mtu 1500 for CNI 
	I1122 00:53:44.331850       1 main.go:178] kindnetd IP family: "ipv4"
	I1122 00:53:44.331878       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-22T00:53:44Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1122 00:53:44.525582       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1122 00:53:44.525674       1 controller.go:381] "Waiting for informer caches to sync"
	I1122 00:53:44.525774       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1122 00:53:44.525934       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1122 00:54:14.526429       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1122 00:54:14.526429       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1122 00:54:14.526541       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1122 00:54:14.527781       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1122 00:54:16.026659       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1122 00:54:16.026687       1 metrics.go:72] Registering metrics
	I1122 00:54:16.026754       1 controller.go:711] "Syncing nftables rules"
	I1122 00:54:24.526451       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:54:24.526528       1 main.go:301] handling current node
	I1122 00:54:34.530732       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:54:34.530774       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4c192c23a5c2cb8d4827103c705875b67426f13e7541c3c230c0bacb6b6f0ca9] <==
	I1122 00:53:43.015516       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1122 00:53:43.068230       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1122 00:53:43.074098       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1122 00:53:43.074199       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1122 00:53:43.074335       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1122 00:53:43.080556       1 shared_informer.go:318] Caches are synced for configmaps
	I1122 00:53:43.080659       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1122 00:53:43.081329       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1122 00:53:43.081353       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1122 00:53:43.083344       1 aggregator.go:166] initial CRD sync complete...
	I1122 00:53:43.083436       1 autoregister_controller.go:141] Starting autoregister controller
	I1122 00:53:43.083465       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1122 00:53:43.083604       1 cache.go:39] Caches are synced for autoregister controller
	E1122 00:53:43.129714       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1122 00:53:43.685908       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1122 00:53:44.880932       1 controller.go:624] quota admission added evaluator for: namespaces
	I1122 00:53:44.922456       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1122 00:53:44.954300       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1122 00:53:44.965680       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1122 00:53:44.974500       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1122 00:53:45.046745       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.27.187"}
	I1122 00:53:45.071912       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.1.58"}
	I1122 00:53:56.251708       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1122 00:53:56.453951       1 controller.go:624] quota admission added evaluator for: endpoints
	I1122 00:53:56.504809       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [fee91ee3c441411dbcd777ca8d5095cfd10e8a67a33ad4caf348ae63ec865a72] <==
	I1122 00:53:56.140486       1 shared_informer.go:318] Caches are synced for resource quota
	I1122 00:53:56.144539       1 shared_informer.go:318] Caches are synced for deployment
	I1122 00:53:56.512438       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1122 00:53:56.520254       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1122 00:53:56.553614       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-l5hnj"
	I1122 00:53:56.553654       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-kp26b"
	I1122 00:53:56.565637       1 shared_informer.go:318] Caches are synced for garbage collector
	I1122 00:53:56.571727       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="51.340559ms"
	I1122 00:53:56.572649       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="61.208185ms"
	I1122 00:53:56.595118       1 shared_informer.go:318] Caches are synced for garbage collector
	I1122 00:53:56.595208       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1122 00:53:56.600814       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="27.717588ms"
	I1122 00:53:56.601024       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="47.728µs"
	I1122 00:53:56.603169       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="29.777171ms"
	I1122 00:53:56.603340       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="74.475µs"
	I1122 00:53:56.620368       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="75.739µs"
	I1122 00:54:01.910272       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="100.321µs"
	I1122 00:54:02.920463       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="54.915µs"
	I1122 00:54:03.922594       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="74.459µs"
	I1122 00:54:06.939553       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="14.237408ms"
	I1122 00:54:06.939728       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="53.036µs"
	I1122 00:54:19.606180       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="17.185253ms"
	I1122 00:54:19.606418       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="119.537µs"
	I1122 00:54:21.966807       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="83.198µs"
	I1122 00:54:26.881897       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="92.485µs"
	
	
	==> kube-proxy [b36da003b1235361d4b8a4e7e49cab04a763af242e9b15f7fb361a03edb9e4c8] <==
	I1122 00:53:44.457752       1 server_others.go:69] "Using iptables proxy"
	I1122 00:53:44.479853       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1122 00:53:44.499092       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1122 00:53:44.504001       1 server_others.go:152] "Using iptables Proxier"
	I1122 00:53:44.504041       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1122 00:53:44.504050       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1122 00:53:44.504080       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1122 00:53:44.504294       1 server.go:846] "Version info" version="v1.28.0"
	I1122 00:53:44.504522       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:53:44.505252       1 config.go:188] "Starting service config controller"
	I1122 00:53:44.505276       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1122 00:53:44.505292       1 config.go:97] "Starting endpoint slice config controller"
	I1122 00:53:44.505296       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1122 00:53:44.505880       1 config.go:315] "Starting node config controller"
	I1122 00:53:44.505889       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1122 00:53:44.606104       1 shared_informer.go:318] Caches are synced for node config
	I1122 00:53:44.606142       1 shared_informer.go:318] Caches are synced for service config
	I1122 00:53:44.606168       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [9deafbe8687dcd224ab5e480cfefa5cc596bb04d62aab6f5da3083aca07488e8] <==
	I1122 00:53:40.682527       1 serving.go:348] Generated self-signed cert in-memory
	W1122 00:53:42.847335       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1122 00:53:42.847435       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1122 00:53:42.847468       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1122 00:53:42.847498       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1122 00:53:43.019354       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1122 00:53:43.019454       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:53:43.026053       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1122 00:53:43.026281       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:53:43.026341       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1122 00:53:43.026382       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1122 00:53:43.127223       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 22 00:53:56 old-k8s-version-625837 kubelet[785]: I1122 00:53:56.646150     785 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/beaa8339-684c-498f-81c1-beb37e1977c4-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-l5hnj\" (UID: \"beaa8339-684c-498f-81c1-beb37e1977c4\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l5hnj"
	Nov 22 00:53:56 old-k8s-version-625837 kubelet[785]: I1122 00:53:56.646177     785 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn4xk\" (UniqueName: \"kubernetes.io/projected/8bf88fab-10f4-4b9e-9866-f2cc0cade558-kube-api-access-fn4xk\") pod \"kubernetes-dashboard-8694d4445c-kp26b\" (UID: \"8bf88fab-10f4-4b9e-9866-f2cc0cade558\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-kp26b"
	Nov 22 00:53:56 old-k8s-version-625837 kubelet[785]: I1122 00:53:56.646206     785 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw22f\" (UniqueName: \"kubernetes.io/projected/beaa8339-684c-498f-81c1-beb37e1977c4-kube-api-access-mw22f\") pod \"dashboard-metrics-scraper-5f989dc9cf-l5hnj\" (UID: \"beaa8339-684c-498f-81c1-beb37e1977c4\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l5hnj"
	Nov 22 00:53:56 old-k8s-version-625837 kubelet[785]: W1122 00:53:56.912102     785 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/c1b8e95ff95e3c8d00f04e99efeff1b9aae77957e1730bef176997569aa985cb/crio-958c2701ff983f0ad7298f1e136c428203606ac2a8085f04ca54797af8f4a1b8 WatchSource:0}: Error finding container 958c2701ff983f0ad7298f1e136c428203606ac2a8085f04ca54797af8f4a1b8: Status 404 returned error can't find the container with id 958c2701ff983f0ad7298f1e136c428203606ac2a8085f04ca54797af8f4a1b8
	Nov 22 00:53:56 old-k8s-version-625837 kubelet[785]: W1122 00:53:56.929243     785 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/c1b8e95ff95e3c8d00f04e99efeff1b9aae77957e1730bef176997569aa985cb/crio-42302c17730cd39cb619ef1485897b51ad645c8f1b41f2fbea23d194a073196a WatchSource:0}: Error finding container 42302c17730cd39cb619ef1485897b51ad645c8f1b41f2fbea23d194a073196a: Status 404 returned error can't find the container with id 42302c17730cd39cb619ef1485897b51ad645c8f1b41f2fbea23d194a073196a
	Nov 22 00:54:01 old-k8s-version-625837 kubelet[785]: I1122 00:54:01.893753     785 scope.go:117] "RemoveContainer" containerID="ff320e48a3a516fef9253d0cdbb9e9db7010c4d7b9c8cffa9a738b9d7614378e"
	Nov 22 00:54:02 old-k8s-version-625837 kubelet[785]: I1122 00:54:02.900327     785 scope.go:117] "RemoveContainer" containerID="ff320e48a3a516fef9253d0cdbb9e9db7010c4d7b9c8cffa9a738b9d7614378e"
	Nov 22 00:54:02 old-k8s-version-625837 kubelet[785]: I1122 00:54:02.900630     785 scope.go:117] "RemoveContainer" containerID="058de021acd50f7260a63c0cbbdce1ea31f652c5c53d3110bd9e0698e868d29d"
	Nov 22 00:54:02 old-k8s-version-625837 kubelet[785]: E1122 00:54:02.900927     785 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-l5hnj_kubernetes-dashboard(beaa8339-684c-498f-81c1-beb37e1977c4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l5hnj" podUID="beaa8339-684c-498f-81c1-beb37e1977c4"
	Nov 22 00:54:03 old-k8s-version-625837 kubelet[785]: I1122 00:54:03.904527     785 scope.go:117] "RemoveContainer" containerID="058de021acd50f7260a63c0cbbdce1ea31f652c5c53d3110bd9e0698e868d29d"
	Nov 22 00:54:03 old-k8s-version-625837 kubelet[785]: E1122 00:54:03.904805     785 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-l5hnj_kubernetes-dashboard(beaa8339-684c-498f-81c1-beb37e1977c4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l5hnj" podUID="beaa8339-684c-498f-81c1-beb37e1977c4"
	Nov 22 00:54:06 old-k8s-version-625837 kubelet[785]: I1122 00:54:06.865494     785 scope.go:117] "RemoveContainer" containerID="058de021acd50f7260a63c0cbbdce1ea31f652c5c53d3110bd9e0698e868d29d"
	Nov 22 00:54:06 old-k8s-version-625837 kubelet[785]: E1122 00:54:06.865830     785 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-l5hnj_kubernetes-dashboard(beaa8339-684c-498f-81c1-beb37e1977c4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l5hnj" podUID="beaa8339-684c-498f-81c1-beb37e1977c4"
	Nov 22 00:54:14 old-k8s-version-625837 kubelet[785]: I1122 00:54:14.929059     785 scope.go:117] "RemoveContainer" containerID="a05ae1ca90b871431d6a63387000b3a0fc2d30bdc217ce9cd70319e940e72234"
	Nov 22 00:54:14 old-k8s-version-625837 kubelet[785]: I1122 00:54:14.963945     785 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-kp26b" podStartSLOduration=9.908889409 podCreationTimestamp="2025-11-22 00:53:56 +0000 UTC" firstStartedPulling="2025-11-22 00:53:56.933440852 +0000 UTC m=+20.508574084" lastFinishedPulling="2025-11-22 00:54:05.988426258 +0000 UTC m=+29.563559482" observedRunningTime="2025-11-22 00:54:06.928332717 +0000 UTC m=+30.503465949" watchObservedRunningTime="2025-11-22 00:54:14.963874807 +0000 UTC m=+38.539008031"
	Nov 22 00:54:21 old-k8s-version-625837 kubelet[785]: I1122 00:54:21.627276     785 scope.go:117] "RemoveContainer" containerID="058de021acd50f7260a63c0cbbdce1ea31f652c5c53d3110bd9e0698e868d29d"
	Nov 22 00:54:21 old-k8s-version-625837 kubelet[785]: I1122 00:54:21.946747     785 scope.go:117] "RemoveContainer" containerID="058de021acd50f7260a63c0cbbdce1ea31f652c5c53d3110bd9e0698e868d29d"
	Nov 22 00:54:21 old-k8s-version-625837 kubelet[785]: I1122 00:54:21.947024     785 scope.go:117] "RemoveContainer" containerID="d5df7182fcdae9d1252372617e3f730dce9069a240030f68c42a96f8a784beb0"
	Nov 22 00:54:21 old-k8s-version-625837 kubelet[785]: E1122 00:54:21.947350     785 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-l5hnj_kubernetes-dashboard(beaa8339-684c-498f-81c1-beb37e1977c4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l5hnj" podUID="beaa8339-684c-498f-81c1-beb37e1977c4"
	Nov 22 00:54:26 old-k8s-version-625837 kubelet[785]: I1122 00:54:26.865826     785 scope.go:117] "RemoveContainer" containerID="d5df7182fcdae9d1252372617e3f730dce9069a240030f68c42a96f8a784beb0"
	Nov 22 00:54:26 old-k8s-version-625837 kubelet[785]: E1122 00:54:26.866133     785 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-l5hnj_kubernetes-dashboard(beaa8339-684c-498f-81c1-beb37e1977c4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l5hnj" podUID="beaa8339-684c-498f-81c1-beb37e1977c4"
	Nov 22 00:54:33 old-k8s-version-625837 kubelet[785]: I1122 00:54:33.417176     785 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 22 00:54:33 old-k8s-version-625837 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 22 00:54:33 old-k8s-version-625837 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 22 00:54:33 old-k8s-version-625837 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [c68e57a00d3a76b7ae45ba0f8dd0b5bd690d691009ae2395dd8a7a8d4b3955db] <==
	2025/11/22 00:54:06 Using namespace: kubernetes-dashboard
	2025/11/22 00:54:06 Using in-cluster config to connect to apiserver
	2025/11/22 00:54:06 Using secret token for csrf signing
	2025/11/22 00:54:06 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/22 00:54:06 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/22 00:54:06 Successful initial request to the apiserver, version: v1.28.0
	2025/11/22 00:54:06 Generating JWE encryption key
	2025/11/22 00:54:06 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/22 00:54:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/22 00:54:06 Initializing JWE encryption key from synchronized object
	2025/11/22 00:54:06 Creating in-cluster Sidecar client
	2025/11/22 00:54:06 Serving insecurely on HTTP port: 9090
	2025/11/22 00:54:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/22 00:54:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/22 00:54:06 Starting overwatch
	
	
	==> storage-provisioner [a05ae1ca90b871431d6a63387000b3a0fc2d30bdc217ce9cd70319e940e72234] <==
	I1122 00:53:44.303878       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1122 00:54:14.309990       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [ba7fcf01e100d9075f965f94ff668899f01eaaea6cf6c057437439123135dbae] <==
	I1122 00:54:14.981328       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1122 00:54:14.994094       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1122 00:54:14.994150       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1122 00:54:32.403212       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1122 00:54:32.403660       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"829114ab-116e-46a2-b9b8-eeaca50c29a6", APIVersion:"v1", ResourceVersion:"669", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-625837_8a72e68a-4046-4038-aa83-3de9b61a4c43 became leader
	I1122 00:54:32.403779       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-625837_8a72e68a-4046-4038-aa83-3de9b61a4c43!
	I1122 00:54:32.504660       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-625837_8a72e68a-4046-4038-aa83-3de9b61a4c43!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-625837 -n old-k8s-version-625837
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-625837 -n old-k8s-version-625837: exit status 2 (348.584802ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-625837 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.58s)
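For context on the failure above: the captured kubelet journal ends with systemd stopping kubelet.service at 00:54:33, which is consistent with the pause having stopped kubelet, while `minikube status --format={{.APIServer}}` still reports "Running" and exits with status 2; the underlying verification error itself is not shown in this excerpt. A minimal manual reproduction sketch, assuming the binary path and profile name used elsewhere in this report:

	out/minikube-linux-arm64 pause -p old-k8s-version-625837 --alsologtostderr -v=1
	out/minikube-linux-arm64 status -p old-k8s-version-625837
	out/minikube-linux-arm64 unpause -p old-k8s-version-625837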

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.64s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-165130 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-165130 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (309.094042ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:56:11Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
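The stderr above shows where the addon enable gives up: per the error chain, the paused-state check shells out to `sudo runc list -f json`, and on this crio-based node that fails because /run/runc does not exist, which is what surfaces as MK_ADDON_ENABLE_PAUSED. A rough way to confirm the same condition by hand, assuming the profile name from this run and that crictl is present on the node:

	out/minikube-linux-arm64 -p no-preload-165130 ssh -- ls /run/runc
	out/minikube-linux-arm64 -p no-preload-165130 ssh -- sudo crictl ps -a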
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-165130 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-165130 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-165130 describe deploy/metrics-server -n kube-system: exit status 1 (85.305163ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-165130 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-165130
helpers_test.go:243: (dbg) docker inspect no-preload-165130:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1c65dce5fc4b31daf0735f57309cfc4584efb68447875212bc03d5cc1861ca03",
	        "Created": "2025-11-22T00:54:44.324816446Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 700684,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-22T00:54:44.545179363Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:97061622515a85470dad13cb046ad5fc5021fcd2b05aa921a620abb52951cd5d",
	        "ResolvConfPath": "/var/lib/docker/containers/1c65dce5fc4b31daf0735f57309cfc4584efb68447875212bc03d5cc1861ca03/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1c65dce5fc4b31daf0735f57309cfc4584efb68447875212bc03d5cc1861ca03/hostname",
	        "HostsPath": "/var/lib/docker/containers/1c65dce5fc4b31daf0735f57309cfc4584efb68447875212bc03d5cc1861ca03/hosts",
	        "LogPath": "/var/lib/docker/containers/1c65dce5fc4b31daf0735f57309cfc4584efb68447875212bc03d5cc1861ca03/1c65dce5fc4b31daf0735f57309cfc4584efb68447875212bc03d5cc1861ca03-json.log",
	        "Name": "/no-preload-165130",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-165130:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-165130",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1c65dce5fc4b31daf0735f57309cfc4584efb68447875212bc03d5cc1861ca03",
	                "LowerDir": "/var/lib/docker/overlay2/1fcc7ed347f82b0a593d86e8b13d7b8b6ed58d69e01b67e3031748c6c4f0b12f-init/diff:/var/lib/docker/overlay2/7e8788c6de692bc1c3758a2bb2c4b8da0fbba26855f855c0f3b655bfbdd92f8e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1fcc7ed347f82b0a593d86e8b13d7b8b6ed58d69e01b67e3031748c6c4f0b12f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1fcc7ed347f82b0a593d86e8b13d7b8b6ed58d69e01b67e3031748c6c4f0b12f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1fcc7ed347f82b0a593d86e8b13d7b8b6ed58d69e01b67e3031748c6c4f0b12f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-165130",
	                "Source": "/var/lib/docker/volumes/no-preload-165130/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-165130",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-165130",
	                "name.minikube.sigs.k8s.io": "no-preload-165130",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4b5aa3342c9f7a8d34eba49098fada5590582f3b67d41fe30fa056539d906719",
	            "SandboxKey": "/var/run/docker/netns/4b5aa3342c9f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33781"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33782"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33785"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33783"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33784"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-165130": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:b0:99:1c:66:6a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0ab9f51973bdd85552219ab44a532b9743aba79f533b4d8124872498c1e7cb0a",
	                    "EndpointID": "60a8c8487a2f8e4785368dfdbf9ed3e592cc7d27a8f59d42f6c5451852065e5d",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-165130",
	                        "1c65dce5fc4b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
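From Docker's perspective the kic container itself looks healthy in the inspect output above: State.Running is true, State.Paused is false, and the SSH and API server ports (22/tcp and 8443/tcp) are published on 127.0.0.1, suggesting the problem is inside the node rather than at the container level. The same state fields can be pulled directly with a Go template, assuming the Docker CLI on the host:

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' no-preload-165130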
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-165130 -n no-preload-165130
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-165130 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-165130 logs -n 25: (1.292152125s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-163229 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-163229             │ jenkins │ v1.37.0 │ 22 Nov 25 00:50 UTC │                     │
	│ ssh     │ -p cilium-163229 sudo crio config                                                                                                                                                                                                             │ cilium-163229             │ jenkins │ v1.37.0 │ 22 Nov 25 00:50 UTC │                     │
	│ delete  │ -p cilium-163229                                                                                                                                                                                                                              │ cilium-163229             │ jenkins │ v1.37.0 │ 22 Nov 25 00:50 UTC │ 22 Nov 25 00:50 UTC │
	│ start   │ -p force-systemd-env-634519 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-634519  │ jenkins │ v1.37.0 │ 22 Nov 25 00:50 UTC │ 22 Nov 25 00:51 UTC │
	│ delete  │ -p kubernetes-upgrade-134864                                                                                                                                                                                                                  │ kubernetes-upgrade-134864 │ jenkins │ v1.37.0 │ 22 Nov 25 00:50 UTC │ 22 Nov 25 00:51 UTC │
	│ start   │ -p cert-expiration-621390 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-621390    │ jenkins │ v1.37.0 │ 22 Nov 25 00:51 UTC │ 22 Nov 25 00:51 UTC │
	│ delete  │ -p force-systemd-env-634519                                                                                                                                                                                                                   │ force-systemd-env-634519  │ jenkins │ v1.37.0 │ 22 Nov 25 00:51 UTC │ 22 Nov 25 00:51 UTC │
	│ start   │ -p cert-options-002126 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-002126       │ jenkins │ v1.37.0 │ 22 Nov 25 00:51 UTC │ 22 Nov 25 00:52 UTC │
	│ ssh     │ cert-options-002126 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-002126       │ jenkins │ v1.37.0 │ 22 Nov 25 00:52 UTC │ 22 Nov 25 00:52 UTC │
	│ ssh     │ -p cert-options-002126 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-002126       │ jenkins │ v1.37.0 │ 22 Nov 25 00:52 UTC │ 22 Nov 25 00:52 UTC │
	│ delete  │ -p cert-options-002126                                                                                                                                                                                                                        │ cert-options-002126       │ jenkins │ v1.37.0 │ 22 Nov 25 00:52 UTC │ 22 Nov 25 00:52 UTC │
	│ start   │ -p old-k8s-version-625837 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-625837    │ jenkins │ v1.37.0 │ 22 Nov 25 00:52 UTC │ 22 Nov 25 00:53 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-625837 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-625837    │ jenkins │ v1.37.0 │ 22 Nov 25 00:53 UTC │                     │
	│ stop    │ -p old-k8s-version-625837 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-625837    │ jenkins │ v1.37.0 │ 22 Nov 25 00:53 UTC │ 22 Nov 25 00:53 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-625837 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-625837    │ jenkins │ v1.37.0 │ 22 Nov 25 00:53 UTC │ 22 Nov 25 00:53 UTC │
	│ start   │ -p old-k8s-version-625837 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-625837    │ jenkins │ v1.37.0 │ 22 Nov 25 00:53 UTC │ 22 Nov 25 00:54 UTC │
	│ image   │ old-k8s-version-625837 image list --format=json                                                                                                                                                                                               │ old-k8s-version-625837    │ jenkins │ v1.37.0 │ 22 Nov 25 00:54 UTC │ 22 Nov 25 00:54 UTC │
	│ pause   │ -p old-k8s-version-625837 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-625837    │ jenkins │ v1.37.0 │ 22 Nov 25 00:54 UTC │                     │
	│ start   │ -p cert-expiration-621390 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-621390    │ jenkins │ v1.37.0 │ 22 Nov 25 00:54 UTC │ 22 Nov 25 00:55 UTC │
	│ delete  │ -p old-k8s-version-625837                                                                                                                                                                                                                     │ old-k8s-version-625837    │ jenkins │ v1.37.0 │ 22 Nov 25 00:54 UTC │ 22 Nov 25 00:54 UTC │
	│ delete  │ -p old-k8s-version-625837                                                                                                                                                                                                                     │ old-k8s-version-625837    │ jenkins │ v1.37.0 │ 22 Nov 25 00:54 UTC │ 22 Nov 25 00:54 UTC │
	│ start   │ -p no-preload-165130 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-165130         │ jenkins │ v1.37.0 │ 22 Nov 25 00:54 UTC │ 22 Nov 25 00:56 UTC │
	│ delete  │ -p cert-expiration-621390                                                                                                                                                                                                                     │ cert-expiration-621390    │ jenkins │ v1.37.0 │ 22 Nov 25 00:55 UTC │ 22 Nov 25 00:55 UTC │
	│ start   │ -p embed-certs-879000 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-879000        │ jenkins │ v1.37.0 │ 22 Nov 25 00:55 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-165130 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-165130         │ jenkins │ v1.37.0 │ 22 Nov 25 00:56 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 00:55:12
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1122 00:55:12.488441  703787 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:55:12.488831  703787 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:55:12.488856  703787 out.go:374] Setting ErrFile to fd 2...
	I1122 00:55:12.488863  703787 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:55:12.489176  703787 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1122 00:55:12.489599  703787 out.go:368] Setting JSON to false
	I1122 00:55:12.490546  703787 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":20229,"bootTime":1763752684,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1122 00:55:12.490620  703787 start.go:143] virtualization:  
	I1122 00:55:12.494165  703787 out.go:179] * [embed-certs-879000] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1122 00:55:12.498369  703787 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:55:12.498436  703787 notify.go:221] Checking for updates...
	I1122 00:55:12.504666  703787 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:55:12.507649  703787 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1122 00:55:12.510834  703787 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-513600/.minikube
	I1122 00:55:12.513781  703787 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1122 00:55:12.516839  703787 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:55:08.964869  700166 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (5.603224344s)
	I1122 00:55:08.964901  700166 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1122 00:55:08.964925  700166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1122 00:55:08.965051  700166 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (5.603555107s)
	I1122 00:55:08.965061  700166 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21934-513600/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1122 00:55:09.074152  700166 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1122 00:55:09.074229  700166 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1122 00:55:09.885592  700166 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21934-513600/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1122 00:55:09.885629  700166 cache_images.go:125] Successfully loaded all cached images
	I1122 00:55:09.885636  700166 cache_images.go:94] duration metric: took 17.921760808s to LoadCachedImages
	I1122 00:55:09.885648  700166 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1122 00:55:09.885739  700166 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-165130 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-165130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1122 00:55:09.885890  700166 ssh_runner.go:195] Run: crio config
	I1122 00:55:09.975206  700166 cni.go:84] Creating CNI manager for ""
	I1122 00:55:09.975242  700166 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:55:09.975260  700166 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1122 00:55:09.975293  700166 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-165130 NodeName:no-preload-165130 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 00:55:09.975437  700166 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-165130"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
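The multi-document block above is the kubeadm configuration minikube renders for this profile; it is written to /var/tmp/minikube/kubeadm.yaml.new further down and fed to kubeadm init. A minimal sketch (assuming gopkg.in/yaml.v3 is available) of pulling the KubeletConfiguration document out of such a file and printing the two settings that must agree with the CRI-O runtime, cgroupDriver and containerRuntimeEndpoint:

    // checkcfg.go - sketch only, not minikube code: scan a multi-document
    // kubeadm config like the one above and print the kubelet runtime settings.
    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3" // assumed dependency: go get gopkg.in/yaml.v3
    )

    func main() {
        f, err := os.Open("kubeadm.yaml") // path is a placeholder
        if err != nil {
            panic(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                panic(err)
            }
            if doc["kind"] == "KubeletConfiguration" {
                // With crio these are expected to be "cgroupfs" and
                // "unix:///var/run/crio/crio.sock" respectively.
                fmt.Println("cgroupDriver:", doc["cgroupDriver"])
                fmt.Println("containerRuntimeEndpoint:", doc["containerRuntimeEndpoint"])
            }
        }
    }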
	
	I1122 00:55:09.975521  700166 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 00:55:09.986897  700166 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1122 00:55:09.986975  700166 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1122 00:55:09.995710  700166 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1122 00:55:09.995880  700166 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1122 00:55:09.996119  700166 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21934-513600/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1122 00:55:09.996471  700166 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21934-513600/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1122 00:55:10.007645  700166 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1122 00:55:10.007698  700166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1122 00:55:10.974228  700166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:55:10.988614  700166 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1122 00:55:10.991790  700166 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1122 00:55:10.995492  700166 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1122 00:55:10.995530  700166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
	I1122 00:55:10.997177  700166 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1122 00:55:10.997212  700166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
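The binary.go/download.go lines above fetch kubectl, kubeadm and kubelet from dl.k8s.io with a ?checksum=file:…sha256 suffix, so each binary is verified against its published SHA-256 before being scp'd into /var/lib/minikube/binaries. A standard-library Go sketch of the same download-and-verify step (not minikube's actual downloader; URL hard-coded as an example):

    // fetch.go - sketch of a checksum-verified download of the kubectl binary.
    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
        "strings"
    )

    func fetch(url string) ([]byte, error) {
        resp, err := http.Get(url)
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        return io.ReadAll(resp.Body)
    }

    func main() {
        base := "https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl"
        bin, err := fetch(base)
        if err != nil {
            panic(err)
        }
        sum, err := fetch(base + ".sha256") // published checksum file
        if err != nil {
            panic(err)
        }
        got := sha256.Sum256(bin)
        want := strings.Fields(string(sum))[0] // file holds the hex digest
        if hex.EncodeToString(got[:]) != want {
            panic("checksum mismatch")
        }
        if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
            panic(err)
        }
        fmt.Println("kubectl verified:", want)
    }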
	I1122 00:55:11.630888  700166 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1122 00:55:11.641737  700166 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1122 00:55:11.658954  700166 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:55:11.680224  700166 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1122 00:55:11.707570  700166 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1122 00:55:11.712128  700166 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:55:11.725106  700166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:55:11.848942  700166 ssh_runner.go:195] Run: sudo systemctl start kubelet
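The /etc/hosts one-liner a few lines up (grep -v the old entry, echo the new one, copy the temp file back) is how minikube keeps exactly one control-plane.minikube.internal mapping on the node. The same idempotent rewrite, sketched in Go with placeholder paths; the real command runs remotely over SSH, this only mirrors the logic:

    // hosts.go - sketch of the idempotent hosts-file update shown above.
    package main

    import (
        "os"
        "strings"
    )

    func upsertHost(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // Drop any previous mapping for this hostname (mirrors `grep -v`).
            if strings.HasSuffix(line, "\t"+name) {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        // /tmp/hosts stands in for /etc/hosts, which needs root to modify.
        if err := upsertHost("/tmp/hosts", "192.168.85.2", "control-plane.minikube.internal"); err != nil {
            panic(err)
        }
    }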
	I1122 00:55:11.877217  700166 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/no-preload-165130 for IP: 192.168.85.2
	I1122 00:55:11.877237  700166 certs.go:195] generating shared ca certs ...
	I1122 00:55:11.877260  700166 certs.go:227] acquiring lock for ca certs: {Name:mkaf4c79493334cb2058b349c36be7473837f9b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:55:11.877401  700166 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key
	I1122 00:55:11.877444  700166 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key
	I1122 00:55:11.877450  700166 certs.go:257] generating profile certs ...
	I1122 00:55:11.877505  700166 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/no-preload-165130/client.key
	I1122 00:55:11.877517  700166 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/no-preload-165130/client.crt with IP's: []
	I1122 00:55:12.345120  700166 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/no-preload-165130/client.crt ...
	I1122 00:55:12.345153  700166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/no-preload-165130/client.crt: {Name:mk009fd411c7987abc8adb3486298ad0ff8182e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:55:12.345350  700166 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/no-preload-165130/client.key ...
	I1122 00:55:12.345365  700166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/no-preload-165130/client.key: {Name:mkc5136e63691f2fa8a619443afc14eb0506b28b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:55:12.345445  700166 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/no-preload-165130/apiserver.key.f1b30e0b
	I1122 00:55:12.345469  700166 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/no-preload-165130/apiserver.crt.f1b30e0b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1122 00:55:12.637656  700166 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/no-preload-165130/apiserver.crt.f1b30e0b ...
	I1122 00:55:12.637684  700166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/no-preload-165130/apiserver.crt.f1b30e0b: {Name:mke001dc531f7e693a3e6f9fac591a62f6a69649 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:55:12.637908  700166 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/no-preload-165130/apiserver.key.f1b30e0b ...
	I1122 00:55:12.637920  700166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/no-preload-165130/apiserver.key.f1b30e0b: {Name:mkd5fbd9227abcb6f590e3f63ea274094f588b25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:55:12.638016  700166 certs.go:382] copying /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/no-preload-165130/apiserver.crt.f1b30e0b -> /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/no-preload-165130/apiserver.crt
	I1122 00:55:12.638101  700166 certs.go:386] copying /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/no-preload-165130/apiserver.key.f1b30e0b -> /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/no-preload-165130/apiserver.key
	I1122 00:55:12.638156  700166 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/no-preload-165130/proxy-client.key
	I1122 00:55:12.638169  700166 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/no-preload-165130/proxy-client.crt with IP's: []
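The certs.go/crypto.go lines in this stretch generate the profile's client, apiserver and aggregator certificates; the apiserver cert carries the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2] so it is valid for the service VIP, localhost and the node IP. A standard-library sketch of issuing a certificate with those SANs (self-signed here for brevity; minikube actually signs with its profile CA):

    // selfsign.go - sketch of an IP-SAN certificate like the apiserver cert above.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // Same IP SANs as the apiserver cert in the log above.
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
            },
        }
        // Self-signed for the sketch; pass a CA template and key to sign properly.
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
            panic(err)
        }
    }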
	I1122 00:55:12.520159  703787 config.go:182] Loaded profile config "no-preload-165130": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:55:12.520255  703787 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:55:12.556207  703787 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1122 00:55:12.556326  703787 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:55:12.656473  703787 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:true NGoroutines:68 SystemTime:2025-11-22 00:55:12.646854435 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:55:12.656576  703787 docker.go:319] overlay module found
	I1122 00:55:12.659663  703787 out.go:179] * Using the docker driver based on user configuration
	I1122 00:55:12.662515  703787 start.go:309] selected driver: docker
	I1122 00:55:12.662537  703787 start.go:930] validating driver "docker" against <nil>
	I1122 00:55:12.662552  703787 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:55:12.663264  703787 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:55:12.757267  703787 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:true NGoroutines:68 SystemTime:2025-11-22 00:55:12.74839812 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:55:12.757405  703787 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1122 00:55:12.757615  703787 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:55:12.760691  703787 out.go:179] * Using Docker driver with root privileges
	I1122 00:55:12.763498  703787 cni.go:84] Creating CNI manager for ""
	I1122 00:55:12.763557  703787 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:55:12.763568  703787 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1122 00:55:12.763645  703787 start.go:353] cluster config:
	{Name:embed-certs-879000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-879000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:55:12.768522  703787 out.go:179] * Starting "embed-certs-879000" primary control-plane node in "embed-certs-879000" cluster
	I1122 00:55:12.771368  703787 cache.go:134] Beginning downloading kic base image for docker with crio
	I1122 00:55:12.779869  703787 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:55:12.782750  703787 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:55:12.782797  703787 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1122 00:55:12.782807  703787 cache.go:65] Caching tarball of preloaded images
	I1122 00:55:12.782898  703787 preload.go:238] Found /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1122 00:55:12.782907  703787 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1122 00:55:12.783020  703787 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/embed-certs-879000/config.json ...
	I1122 00:55:12.783037  703787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/embed-certs-879000/config.json: {Name:mk1ad3844c61e5cba9123086aca7c2ba7a3d57cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:55:12.783186  703787 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:55:12.840255  703787 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:55:12.840282  703787 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:55:12.840299  703787 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:55:12.840322  703787 start.go:360] acquireMachinesLock for embed-certs-879000: {Name:mk05ac8d8898660ab51c5645d9a1c115c537bdda Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:55:12.840429  703787 start.go:364] duration metric: took 89.671µs to acquireMachinesLock for "embed-certs-879000"
	I1122 00:55:12.840457  703787 start.go:93] Provisioning new machine with config: &{Name:embed-certs-879000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-879000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:55:12.840523  703787 start.go:125] createHost starting for "" (driver="docker")
	I1122 00:55:13.146131  700166 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/no-preload-165130/proxy-client.crt ...
	I1122 00:55:13.146159  700166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/no-preload-165130/proxy-client.crt: {Name:mkeac4a0bad2298bd95edcd605522cfff854d351 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:55:13.146365  700166 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/no-preload-165130/proxy-client.key ...
	I1122 00:55:13.146375  700166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/no-preload-165130/proxy-client.key: {Name:mkeadc4e2653aec460e2e4fbb2e28d7a9e5c06f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:55:13.146575  700166 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem (1338 bytes)
	W1122 00:55:13.146617  700166 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937_empty.pem, impossibly tiny 0 bytes
	I1122 00:55:13.146625  700166 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:55:13.146654  700166 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:55:13.146679  700166 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:55:13.146707  700166 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem (1675 bytes)
	I1122 00:55:13.146751  700166 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem (1708 bytes)
	I1122 00:55:13.148774  700166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:55:13.172924  700166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1122 00:55:13.203378  700166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:55:13.223239  700166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:55:13.250722  700166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/no-preload-165130/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1122 00:55:13.273011  700166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/no-preload-165130/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1122 00:55:13.306383  700166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/no-preload-165130/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:55:13.346825  700166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/no-preload-165130/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1122 00:55:13.367590  700166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:55:13.397434  700166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem --> /usr/share/ca-certificates/516937.pem (1338 bytes)
	I1122 00:55:13.447494  700166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /usr/share/ca-certificates/5169372.pem (1708 bytes)
	I1122 00:55:13.480688  700166 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 00:55:13.498217  700166 ssh_runner.go:195] Run: openssl version
	I1122 00:55:13.506242  700166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5169372.pem && ln -fs /usr/share/ca-certificates/5169372.pem /etc/ssl/certs/5169372.pem"
	I1122 00:55:13.517118  700166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5169372.pem
	I1122 00:55:13.526971  700166 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:55 /usr/share/ca-certificates/5169372.pem
	I1122 00:55:13.527040  700166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5169372.pem
	I1122 00:55:13.583971  700166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5169372.pem /etc/ssl/certs/3ec20f2e.0"
	I1122 00:55:13.613509  700166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:55:13.622690  700166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:55:13.635001  700166 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:55:13.635067  700166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:55:13.677081  700166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1122 00:55:13.686312  700166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516937.pem && ln -fs /usr/share/ca-certificates/516937.pem /etc/ssl/certs/516937.pem"
	I1122 00:55:13.710364  700166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516937.pem
	I1122 00:55:13.719213  700166 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:55 /usr/share/ca-certificates/516937.pem
	I1122 00:55:13.719291  700166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516937.pem
	I1122 00:55:13.778997  700166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516937.pem /etc/ssl/certs/51391683.0"
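The repeated openssl x509 -hash / ln -fs pairs above install the minikube CA and the test's extra certificates into the node's trust store under /etc/ssl/certs/<subject-hash>.0, the naming scheme OpenSSL-based tools use when scanning that directory. One such step, sketched in Go; it shells out to openssl rather than re-implementing the subject hash, and the cert path is a placeholder:

    // trust.go - sketch of installing a cert under /etc/ssl/certs using the
    // OpenSSL subject-hash naming scheme, as in the log lines above.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem" // placeholder path
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941" in the log
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        _ = os.Remove(link) // -f semantics: replace an existing link
        if err := os.Symlink(cert, link); err != nil {
            panic(err)
        }
        fmt.Println("linked", cert, "->", link)
    }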
	I1122 00:55:13.792378  700166 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:55:13.802210  700166 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1122 00:55:13.802268  700166 kubeadm.go:401] StartCluster: {Name:no-preload-165130 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-165130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:55:13.802350  700166 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1122 00:55:13.802411  700166 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:55:13.856863  700166 cri.go:89] found id: ""
	I1122 00:55:13.856936  700166 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 00:55:13.869419  700166 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1122 00:55:13.891119  700166 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1122 00:55:13.891218  700166 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1122 00:55:13.900637  700166 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1122 00:55:13.900657  700166 kubeadm.go:158] found existing configuration files:
	
	I1122 00:55:13.900717  700166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1122 00:55:13.914422  700166 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1122 00:55:13.914496  700166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1122 00:55:13.925421  700166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1122 00:55:13.936137  700166 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1122 00:55:13.936201  700166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1122 00:55:13.945613  700166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1122 00:55:13.955436  700166 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1122 00:55:13.955504  700166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1122 00:55:13.964632  700166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1122 00:55:13.974812  700166 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1122 00:55:13.974878  700166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1122 00:55:13.984774  700166 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1122 00:55:14.042168  700166 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1122 00:55:14.042502  700166 kubeadm.go:319] [preflight] Running pre-flight checks
	I1122 00:55:14.068417  700166 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1122 00:55:14.068498  700166 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1122 00:55:14.068538  700166 kubeadm.go:319] OS: Linux
	I1122 00:55:14.068591  700166 kubeadm.go:319] CGROUPS_CPU: enabled
	I1122 00:55:14.068644  700166 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1122 00:55:14.068696  700166 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1122 00:55:14.068748  700166 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1122 00:55:14.068799  700166 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1122 00:55:14.068854  700166 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1122 00:55:14.068903  700166 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1122 00:55:14.068955  700166 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1122 00:55:14.069005  700166 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1122 00:55:14.150868  700166 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1122 00:55:14.150982  700166 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1122 00:55:14.151079  700166 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1122 00:55:14.185512  700166 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1122 00:55:12.853834  703787 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1122 00:55:12.854119  703787 start.go:159] libmachine.API.Create for "embed-certs-879000" (driver="docker")
	I1122 00:55:12.854169  703787 client.go:173] LocalClient.Create starting
	I1122 00:55:12.854262  703787 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem
	I1122 00:55:12.854299  703787 main.go:143] libmachine: Decoding PEM data...
	I1122 00:55:12.854319  703787 main.go:143] libmachine: Parsing certificate...
	I1122 00:55:12.854373  703787 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem
	I1122 00:55:12.854406  703787 main.go:143] libmachine: Decoding PEM data...
	I1122 00:55:12.854423  703787 main.go:143] libmachine: Parsing certificate...
	I1122 00:55:12.854794  703787 cli_runner.go:164] Run: docker network inspect embed-certs-879000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1122 00:55:12.885308  703787 cli_runner.go:211] docker network inspect embed-certs-879000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1122 00:55:12.885395  703787 network_create.go:284] running [docker network inspect embed-certs-879000] to gather additional debugging logs...
	I1122 00:55:12.885412  703787 cli_runner.go:164] Run: docker network inspect embed-certs-879000
	W1122 00:55:12.919972  703787 cli_runner.go:211] docker network inspect embed-certs-879000 returned with exit code 1
	I1122 00:55:12.920001  703787 network_create.go:287] error running [docker network inspect embed-certs-879000]: docker network inspect embed-certs-879000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-879000 not found
	I1122 00:55:12.920015  703787 network_create.go:289] output of [docker network inspect embed-certs-879000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-879000 not found
	
	** /stderr **
	I1122 00:55:12.920108  703787 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:55:12.940693  703787 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b16c782e3da8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ba:82:00:9d:45:d0} reservation:<nil>}
	I1122 00:55:12.941031  703787 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-13c9c00b5de5 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:7a:4e:a4:3d:42:9e} reservation:<nil>}
	I1122 00:55:12.941375  703787 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3c074a6aa87b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9a:1f:77:e5:90:0b} reservation:<nil>}
	I1122 00:55:12.941817  703787 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019759f0}
	I1122 00:55:12.941845  703787 network_create.go:124] attempt to create docker network embed-certs-879000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1122 00:55:12.941905  703787 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-879000 embed-certs-879000
	I1122 00:55:13.011675  703787 network_create.go:108] docker network embed-certs-879000 192.168.76.0/24 created
	I1122 00:55:13.011709  703787 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-879000" container
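	Note: the subnet scan above skips the bridges already claimed by other profiles and settles on the first free /24 (192.168.76.0/24 here). As a manual check only (not something the test runs), the created network can be re-inspected with the same Go template minikube itself uses:
	
		docker network inspect embed-certs-879000 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
		# expected for this profile: 192.168.76.0/24 192.168.76.1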
	I1122 00:55:13.011795  703787 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1122 00:55:13.035788  703787 cli_runner.go:164] Run: docker volume create embed-certs-879000 --label name.minikube.sigs.k8s.io=embed-certs-879000 --label created_by.minikube.sigs.k8s.io=true
	I1122 00:55:13.063301  703787 oci.go:103] Successfully created a docker volume embed-certs-879000
	I1122 00:55:13.063400  703787 cli_runner.go:164] Run: docker run --rm --name embed-certs-879000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-879000 --entrypoint /usr/bin/test -v embed-certs-879000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -d /var/lib
	I1122 00:55:13.685831  703787 oci.go:107] Successfully prepared a docker volume embed-certs-879000
	I1122 00:55:13.685978  703787 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:55:13.685991  703787 kic.go:194] Starting extracting preloaded images to volume ...
	I1122 00:55:13.686062  703787 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-879000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir
	I1122 00:55:14.191329  700166 out.go:252]   - Generating certificates and keys ...
	I1122 00:55:14.191502  700166 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1122 00:55:14.191608  700166 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1122 00:55:14.983080  700166 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1122 00:55:15.623581  700166 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1122 00:55:16.346809  700166 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1122 00:55:16.508383  700166 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1122 00:55:18.856042  703787 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-879000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir: (5.169945675s)
	I1122 00:55:18.856077  703787 kic.go:203] duration metric: took 5.170082475s to extract preloaded images to volume ...
	W1122 00:55:18.856208  703787 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1122 00:55:18.856328  703787 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1122 00:55:18.945489  703787 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-879000 --name embed-certs-879000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-879000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-879000 --network embed-certs-879000 --ip 192.168.76.2 --volume embed-certs-879000:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e
	I1122 00:55:19.298425  703787 cli_runner.go:164] Run: docker container inspect embed-certs-879000 --format={{.State.Running}}
	I1122 00:55:19.321443  703787 cli_runner.go:164] Run: docker container inspect embed-certs-879000 --format={{.State.Status}}
	I1122 00:55:19.354192  703787 cli_runner.go:164] Run: docker exec embed-certs-879000 stat /var/lib/dpkg/alternatives/iptables
	I1122 00:55:19.431005  703787 oci.go:144] the created container "embed-certs-879000" has a running status.
	I1122 00:55:19.431043  703787 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/embed-certs-879000/id_rsa...
	I1122 00:55:19.626206  703787 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21934-513600/.minikube/machines/embed-certs-879000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1122 00:55:19.672364  703787 cli_runner.go:164] Run: docker container inspect embed-certs-879000 --format={{.State.Status}}
	I1122 00:55:19.698718  703787 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1122 00:55:19.698738  703787 kic_runner.go:114] Args: [docker exec --privileged embed-certs-879000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1122 00:55:19.789711  703787 cli_runner.go:164] Run: docker container inspect embed-certs-879000 --format={{.State.Status}}
	I1122 00:55:19.819909  703787 machine.go:94] provisionDockerMachine start ...
	I1122 00:55:19.820010  703787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879000
	I1122 00:55:19.849583  703787 main.go:143] libmachine: Using SSH client type: native
	I1122 00:55:19.850035  703787 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33787 <nil> <nil>}
	I1122 00:55:19.850052  703787 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:55:19.850798  703787 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44016->127.0.0.1:33787: read: connection reset by peer
	I1122 00:55:18.459680  700166 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1122 00:55:18.460036  700166 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-165130] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1122 00:55:18.688214  700166 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1122 00:55:18.688557  700166 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-165130] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1122 00:55:21.097058  700166 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1122 00:55:21.685015  700166 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1122 00:55:22.209907  700166 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1122 00:55:22.210233  700166 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1122 00:55:22.611676  700166 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1122 00:55:22.844150  700166 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1122 00:55:23.418225  700166 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1122 00:55:23.929950  700166 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1122 00:55:25.209676  700166 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1122 00:55:25.217071  700166 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1122 00:55:25.218509  700166 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1122 00:55:23.004448  703787 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-879000
	
	I1122 00:55:23.004472  703787 ubuntu.go:182] provisioning hostname "embed-certs-879000"
	I1122 00:55:23.004556  703787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879000
	I1122 00:55:23.028733  703787 main.go:143] libmachine: Using SSH client type: native
	I1122 00:55:23.029051  703787 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33787 <nil> <nil>}
	I1122 00:55:23.029067  703787 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-879000 && echo "embed-certs-879000" | sudo tee /etc/hostname
	I1122 00:55:23.196256  703787 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-879000
	
	I1122 00:55:23.196466  703787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879000
	I1122 00:55:23.223835  703787 main.go:143] libmachine: Using SSH client type: native
	I1122 00:55:23.224184  703787 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33787 <nil> <nil>}
	I1122 00:55:23.224205  703787 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-879000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-879000/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-879000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:55:23.378176  703787 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1122 00:55:23.378249  703787 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-513600/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-513600/.minikube}
	I1122 00:55:23.378285  703787 ubuntu.go:190] setting up certificates
	I1122 00:55:23.378323  703787 provision.go:84] configureAuth start
	I1122 00:55:23.378413  703787 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-879000
	I1122 00:55:23.400367  703787 provision.go:143] copyHostCerts
	I1122 00:55:23.400429  703787 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem, removing ...
	I1122 00:55:23.400438  703787 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem
	I1122 00:55:23.400519  703787 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem (1078 bytes)
	I1122 00:55:23.400614  703787 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem, removing ...
	I1122 00:55:23.400619  703787 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem
	I1122 00:55:23.400645  703787 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem (1123 bytes)
	I1122 00:55:23.400734  703787 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem, removing ...
	I1122 00:55:23.400740  703787 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem
	I1122 00:55:23.400769  703787 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem (1675 bytes)
	I1122 00:55:23.400824  703787 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem org=jenkins.embed-certs-879000 san=[127.0.0.1 192.168.76.2 embed-certs-879000 localhost minikube]
	I1122 00:55:23.636565  703787 provision.go:177] copyRemoteCerts
	I1122 00:55:23.636681  703787 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:55:23.636738  703787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879000
	I1122 00:55:23.656777  703787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33787 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/embed-certs-879000/id_rsa Username:docker}
	I1122 00:55:23.757406  703787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:55:23.774784  703787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1122 00:55:23.793463  703787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1122 00:55:23.812324  703787 provision.go:87] duration metric: took 433.964557ms to configureAuth
	I1122 00:55:23.812352  703787 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:55:23.812524  703787 config.go:182] Loaded profile config "embed-certs-879000": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:55:23.812633  703787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879000
	I1122 00:55:23.836591  703787 main.go:143] libmachine: Using SSH client type: native
	I1122 00:55:23.836904  703787 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33787 <nil> <nil>}
	I1122 00:55:23.836920  703787 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1122 00:55:24.153819  703787 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1122 00:55:24.153885  703787 machine.go:97] duration metric: took 4.333954123s to provisionDockerMachine
	I1122 00:55:24.153910  703787 client.go:176] duration metric: took 11.299732968s to LocalClient.Create
	I1122 00:55:24.153940  703787 start.go:167] duration metric: took 11.299826659s to libmachine.API.Create "embed-certs-879000"
	I1122 00:55:24.153977  703787 start.go:293] postStartSetup for "embed-certs-879000" (driver="docker")
	I1122 00:55:24.154000  703787 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:55:24.154107  703787 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:55:24.154168  703787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879000
	I1122 00:55:24.175586  703787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33787 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/embed-certs-879000/id_rsa Username:docker}
	I1122 00:55:24.278838  703787 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:55:24.282568  703787 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:55:24.282592  703787 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:55:24.282603  703787 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/addons for local assets ...
	I1122 00:55:24.282654  703787 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/files for local assets ...
	I1122 00:55:24.282731  703787 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem -> 5169372.pem in /etc/ssl/certs
	I1122 00:55:24.282834  703787 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:55:24.290690  703787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /etc/ssl/certs/5169372.pem (1708 bytes)
	I1122 00:55:24.310298  703787 start.go:296] duration metric: took 156.293549ms for postStartSetup
	I1122 00:55:24.310722  703787 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-879000
	I1122 00:55:24.329365  703787 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/embed-certs-879000/config.json ...
	I1122 00:55:24.329632  703787 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:55:24.329671  703787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879000
	I1122 00:55:24.348386  703787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33787 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/embed-certs-879000/id_rsa Username:docker}
	I1122 00:55:24.447013  703787 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:55:24.452194  703787 start.go:128] duration metric: took 11.61165817s to createHost
	I1122 00:55:24.452219  703787 start.go:83] releasing machines lock for "embed-certs-879000", held for 11.611778084s
	I1122 00:55:24.452294  703787 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-879000
	I1122 00:55:24.471314  703787 ssh_runner.go:195] Run: cat /version.json
	I1122 00:55:24.471366  703787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879000
	I1122 00:55:24.471599  703787 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:55:24.471644  703787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879000
	I1122 00:55:24.501529  703787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33787 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/embed-certs-879000/id_rsa Username:docker}
	I1122 00:55:24.506579  703787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33787 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/embed-certs-879000/id_rsa Username:docker}
	I1122 00:55:24.722381  703787 ssh_runner.go:195] Run: systemctl --version
	I1122 00:55:24.729306  703787 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1122 00:55:24.771484  703787 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:55:24.776719  703787 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:55:24.776836  703787 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:55:24.813921  703787 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1122 00:55:24.813994  703787 start.go:496] detecting cgroup driver to use...
	I1122 00:55:24.814037  703787 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1122 00:55:24.814119  703787 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1122 00:55:24.834630  703787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1122 00:55:24.848741  703787 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:55:24.848853  703787 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:55:24.867960  703787 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:55:24.887861  703787 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:55:25.041152  703787 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:55:25.216517  703787 docker.go:234] disabling docker service ...
	I1122 00:55:25.216640  703787 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:55:25.245499  703787 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:55:25.260806  703787 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:55:25.425647  703787 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:55:25.572562  703787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:55:25.587450  703787 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:55:25.600913  703787 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1122 00:55:25.601022  703787 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:55:25.609558  703787 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1122 00:55:25.609671  703787 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:55:25.618098  703787 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:55:25.626273  703787 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:55:25.634385  703787 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:55:25.641995  703787 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:55:25.650076  703787 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:55:25.663104  703787 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
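	Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf pinning the pause image, switching CRI-O to the cgroupfs manager with conmon in the pod cgroup, and opening unprivileged low ports. A sketch of how to confirm the result on the node (expected values derived from the commands above, not captured from the container):
	
		docker exec embed-certs-879000 grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
		# pause_image = "registry.k8s.io/pause:3.10.1"
		# cgroup_manager = "cgroupfs"
		# conmon_cgroup = "pod"
		#   "net.ipv4.ip_unprivileged_port_start=0",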
	I1122 00:55:25.671840  703787 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:55:25.679798  703787 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:55:25.691838  703787 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:55:25.827081  703787 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1122 00:55:26.018995  703787 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1122 00:55:26.019069  703787 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1122 00:55:26.023544  703787 start.go:564] Will wait 60s for crictl version
	I1122 00:55:26.023629  703787 ssh_runner.go:195] Run: which crictl
	I1122 00:55:26.027724  703787 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:55:26.062011  703787 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1122 00:55:26.062108  703787 ssh_runner.go:195] Run: crio --version
	I1122 00:55:26.099749  703787 ssh_runner.go:195] Run: crio --version
	I1122 00:55:26.141851  703787 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1122 00:55:26.144625  703787 cli_runner.go:164] Run: docker network inspect embed-certs-879000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:55:26.163511  703787 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1122 00:55:26.167610  703787 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:55:26.176977  703787 kubeadm.go:884] updating cluster {Name:embed-certs-879000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-879000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1122 00:55:26.177096  703787 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:55:26.177148  703787 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:55:26.216138  703787 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:55:26.216156  703787 crio.go:433] Images already preloaded, skipping extraction
	I1122 00:55:26.216210  703787 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:55:26.243935  703787 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:55:26.244002  703787 cache_images.go:86] Images are preloaded, skipping loading
	I1122 00:55:26.244025  703787 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1122 00:55:26.244135  703787 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-879000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-879000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
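	The kubelet flags above are delivered as a systemd drop-in (the 10-kubeadm.conf file scp'd a few lines further down) rather than by editing the main unit. If the effective unit ever needs checking by hand, a sketch:
	
		docker exec embed-certs-879000 systemctl cat kubelet
		# prints /lib/systemd/system/kubelet.service followed by
		# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf with the ExecStart shown above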
	I1122 00:55:26.244260  703787 ssh_runner.go:195] Run: crio config
	I1122 00:55:26.321497  703787 cni.go:84] Creating CNI manager for ""
	I1122 00:55:26.321565  703787 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:55:26.321598  703787 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1122 00:55:26.321644  703787 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-879000 NodeName:embed-certs-879000 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 00:55:26.321826  703787 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-879000"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1122 00:55:26.321926  703787 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 00:55:26.329625  703787 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:55:26.329738  703787 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1122 00:55:26.337075  703787 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1122 00:55:26.349586  703787 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:55:26.362258  703787 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
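	The manifest just copied to /var/tmp/minikube/kubeadm.yaml.new is what kubeadm init consumes further below. If it ever needs to be linted outside the test, recent kubeadm releases can validate it directly; a manual sketch using the same binary path the test uses:
	
		docker exec embed-certs-879000 /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new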
	I1122 00:55:26.374988  703787 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1122 00:55:26.379174  703787 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:55:26.388186  703787 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:55:26.545093  703787 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:55:26.565390  703787 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/embed-certs-879000 for IP: 192.168.76.2
	I1122 00:55:26.565449  703787 certs.go:195] generating shared ca certs ...
	I1122 00:55:26.565489  703787 certs.go:227] acquiring lock for ca certs: {Name:mkaf4c79493334cb2058b349c36be7473837f9b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:55:26.565676  703787 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key
	I1122 00:55:26.565726  703787 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key
	I1122 00:55:26.565734  703787 certs.go:257] generating profile certs ...
	I1122 00:55:26.565791  703787 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/embed-certs-879000/client.key
	I1122 00:55:26.565828  703787 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/embed-certs-879000/client.crt with IP's: []
	I1122 00:55:26.790677  703787 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/embed-certs-879000/client.crt ...
	I1122 00:55:26.790708  703787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/embed-certs-879000/client.crt: {Name:mk6c8e8cccf12f6873cbc988e3c792a5fbfbf4ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:55:26.790942  703787 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/embed-certs-879000/client.key ...
	I1122 00:55:26.790957  703787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/embed-certs-879000/client.key: {Name:mk3b891a657b525992c0a804aa74d4e6c8e063e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:55:26.791057  703787 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/embed-certs-879000/apiserver.key.f00c2ee1
	I1122 00:55:26.791077  703787 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/embed-certs-879000/apiserver.crt.f00c2ee1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1122 00:55:26.927776  703787 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/embed-certs-879000/apiserver.crt.f00c2ee1 ...
	I1122 00:55:26.927806  703787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/embed-certs-879000/apiserver.crt.f00c2ee1: {Name:mkc46248405516ac699b367f2b2ed39428b9e732 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:55:26.927986  703787 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/embed-certs-879000/apiserver.key.f00c2ee1 ...
	I1122 00:55:26.928000  703787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/embed-certs-879000/apiserver.key.f00c2ee1: {Name:mk5eb052b6e02d3688a6c508610bffbee482eb4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:55:26.928084  703787 certs.go:382] copying /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/embed-certs-879000/apiserver.crt.f00c2ee1 -> /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/embed-certs-879000/apiserver.crt
	I1122 00:55:26.928167  703787 certs.go:386] copying /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/embed-certs-879000/apiserver.key.f00c2ee1 -> /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/embed-certs-879000/apiserver.key
	I1122 00:55:26.928232  703787 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/embed-certs-879000/proxy-client.key
	I1122 00:55:26.928250  703787 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/embed-certs-879000/proxy-client.crt with IP's: []
	I1122 00:55:25.221351  700166 out.go:252]   - Booting up control plane ...
	I1122 00:55:25.221457  700166 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1122 00:55:25.221535  700166 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1122 00:55:25.223371  700166 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1122 00:55:25.257231  700166 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1122 00:55:25.257341  700166 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1122 00:55:25.267694  700166 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1122 00:55:25.267805  700166 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1122 00:55:25.267851  700166 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1122 00:55:25.445485  700166 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1122 00:55:25.445605  700166 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1122 00:55:26.950222  700166 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.50173099s
	I1122 00:55:26.951489  700166 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1122 00:55:26.951584  700166 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1122 00:55:26.951673  700166 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1122 00:55:26.951751  700166 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1122 00:55:27.495270  703787 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/embed-certs-879000/proxy-client.crt ...
	I1122 00:55:27.495298  703787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/embed-certs-879000/proxy-client.crt: {Name:mk8e3fdb8c1d6698a8cf2a0608231705ff27e434 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:55:27.495514  703787 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/embed-certs-879000/proxy-client.key ...
	I1122 00:55:27.495530  703787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/embed-certs-879000/proxy-client.key: {Name:mka1a6f97515c690236fa5acba330a29aba4fe84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:55:27.495734  703787 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem (1338 bytes)
	W1122 00:55:27.495781  703787 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937_empty.pem, impossibly tiny 0 bytes
	I1122 00:55:27.495797  703787 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:55:27.495826  703787 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:55:27.495855  703787 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:55:27.495886  703787 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem (1675 bytes)
	I1122 00:55:27.495938  703787 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem (1708 bytes)
	I1122 00:55:27.496502  703787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:55:27.524707  703787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1122 00:55:27.579403  703787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:55:27.601937  703787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:55:27.641895  703787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/embed-certs-879000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1122 00:55:27.667535  703787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/embed-certs-879000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1122 00:55:27.700713  703787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/embed-certs-879000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:55:27.720527  703787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/embed-certs-879000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1122 00:55:27.763803  703787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem --> /usr/share/ca-certificates/516937.pem (1338 bytes)
	I1122 00:55:27.803323  703787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /usr/share/ca-certificates/5169372.pem (1708 bytes)
	I1122 00:55:27.827349  703787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:55:27.862793  703787 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 00:55:27.877492  703787 ssh_runner.go:195] Run: openssl version
	I1122 00:55:27.887553  703787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516937.pem && ln -fs /usr/share/ca-certificates/516937.pem /etc/ssl/certs/516937.pem"
	I1122 00:55:27.899880  703787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516937.pem
	I1122 00:55:27.906146  703787 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:55 /usr/share/ca-certificates/516937.pem
	I1122 00:55:27.906216  703787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516937.pem
	I1122 00:55:27.974173  703787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516937.pem /etc/ssl/certs/51391683.0"
	I1122 00:55:27.991691  703787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5169372.pem && ln -fs /usr/share/ca-certificates/5169372.pem /etc/ssl/certs/5169372.pem"
	I1122 00:55:28.004852  703787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5169372.pem
	I1122 00:55:28.019483  703787 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:55 /usr/share/ca-certificates/5169372.pem
	I1122 00:55:28.019560  703787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5169372.pem
	I1122 00:55:28.119245  703787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5169372.pem /etc/ssl/certs/3ec20f2e.0"
	I1122 00:55:28.134985  703787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:55:28.148638  703787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:55:28.158325  703787 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:55:28.158400  703787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:55:28.219669  703787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
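	The 51391683.0, 3ec20f2e.0 and b5213941.0 link names above are OpenSSL subject-hash names, which is why each certificate is hashed before being symlinked into /etc/ssl/certs. The same link could be produced by hand along these lines (illustrative only):
	
		h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
		sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
		# here the hash evaluates to b5213941, matching the b5213941.0 link created above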
	I1122 00:55:28.229129  703787 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:55:28.233647  703787 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1122 00:55:28.233700  703787 kubeadm.go:401] StartCluster: {Name:embed-certs-879000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-879000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:55:28.233782  703787 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1122 00:55:28.233884  703787 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:55:28.270244  703787 cri.go:89] found id: ""
	I1122 00:55:28.270317  703787 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 00:55:28.286991  703787 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1122 00:55:28.299072  703787 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1122 00:55:28.299138  703787 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1122 00:55:28.314761  703787 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1122 00:55:28.314780  703787 kubeadm.go:158] found existing configuration files:
	
	I1122 00:55:28.314833  703787 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1122 00:55:28.322822  703787 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1122 00:55:28.322887  703787 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1122 00:55:28.330089  703787 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1122 00:55:28.345729  703787 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1122 00:55:28.345797  703787 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1122 00:55:28.355388  703787 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1122 00:55:28.363725  703787 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1122 00:55:28.363795  703787 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1122 00:55:28.381870  703787 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1122 00:55:28.394371  703787 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1122 00:55:28.394461  703787 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1122 00:55:28.402252  703787 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1122 00:55:28.468934  703787 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1122 00:55:28.468998  703787 kubeadm.go:319] [preflight] Running pre-flight checks
	I1122 00:55:28.518230  703787 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1122 00:55:28.518317  703787 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1122 00:55:28.518357  703787 kubeadm.go:319] OS: Linux
	I1122 00:55:28.518406  703787 kubeadm.go:319] CGROUPS_CPU: enabled
	I1122 00:55:28.518458  703787 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1122 00:55:28.518509  703787 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1122 00:55:28.518564  703787 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1122 00:55:28.518617  703787 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1122 00:55:28.518669  703787 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1122 00:55:28.518718  703787 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1122 00:55:28.518769  703787 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1122 00:55:28.518818  703787 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1122 00:55:28.634349  703787 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1122 00:55:28.634479  703787 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1122 00:55:28.634593  703787 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1122 00:55:28.654247  703787 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1122 00:55:28.659990  703787 out.go:252]   - Generating certificates and keys ...
	I1122 00:55:28.660109  703787 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1122 00:55:28.660194  703787 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1122 00:55:28.975935  703787 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1122 00:55:29.394136  703787 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1122 00:55:29.456480  703787 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1122 00:55:29.920425  703787 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1122 00:55:30.752539  703787 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1122 00:55:30.753054  703787 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-879000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1122 00:55:32.193548  703787 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1122 00:55:32.194092  703787 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-879000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1122 00:55:33.106184  703787 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1122 00:55:33.266628  703787 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1122 00:55:33.660022  703787 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1122 00:55:33.660520  703787 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1122 00:55:33.801538  703787 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1122 00:55:34.514133  703787 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1122 00:55:35.404379  703787 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1122 00:55:35.680907  703787 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1122 00:55:36.208130  703787 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1122 00:55:36.208233  703787 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1122 00:55:36.211199  703787 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1122 00:55:33.598766  700166 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 6.646603432s
	I1122 00:55:35.419354  700166 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 8.466957112s
	I1122 00:55:36.453587  700166 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 9.502470344s
	I1122 00:55:36.484319  700166 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1122 00:55:36.506984  700166 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1122 00:55:36.527782  700166 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1122 00:55:36.528262  700166 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-165130 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1122 00:55:36.542063  700166 kubeadm.go:319] [bootstrap-token] Using token: qp3zts.wnogt5v3he40n5uk
	I1122 00:55:36.214395  703787 out.go:252]   - Booting up control plane ...
	I1122 00:55:36.214499  703787 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1122 00:55:36.214584  703787 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1122 00:55:36.216227  703787 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1122 00:55:36.235783  703787 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1122 00:55:36.235999  703787 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1122 00:55:36.247039  703787 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1122 00:55:36.247566  703787 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1122 00:55:36.247756  703787 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1122 00:55:36.395817  703787 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1122 00:55:36.395941  703787 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1122 00:55:36.545395  700166 out.go:252]   - Configuring RBAC rules ...
	I1122 00:55:36.545521  700166 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1122 00:55:36.551846  700166 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1122 00:55:36.560307  700166 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1122 00:55:36.569269  700166 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1122 00:55:36.579507  700166 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1122 00:55:36.602314  700166 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1122 00:55:36.871619  700166 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1122 00:55:37.359914  700166 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1122 00:55:37.879249  700166 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1122 00:55:37.880846  700166 kubeadm.go:319] 
	I1122 00:55:37.880930  700166 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1122 00:55:37.880936  700166 kubeadm.go:319] 
	I1122 00:55:37.881020  700166 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1122 00:55:37.881024  700166 kubeadm.go:319] 
	I1122 00:55:37.881049  700166 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1122 00:55:37.881527  700166 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1122 00:55:37.881591  700166 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1122 00:55:37.881596  700166 kubeadm.go:319] 
	I1122 00:55:37.881650  700166 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1122 00:55:37.881654  700166 kubeadm.go:319] 
	I1122 00:55:37.881708  700166 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1122 00:55:37.881712  700166 kubeadm.go:319] 
	I1122 00:55:37.881779  700166 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1122 00:55:37.881876  700166 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1122 00:55:37.881948  700166 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1122 00:55:37.881952  700166 kubeadm.go:319] 
	I1122 00:55:37.882294  700166 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1122 00:55:37.882378  700166 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1122 00:55:37.882387  700166 kubeadm.go:319] 
	I1122 00:55:37.882685  700166 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token qp3zts.wnogt5v3he40n5uk \
	I1122 00:55:37.882801  700166 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ecfebb5fda4f065a571cf90106e71e452abce05aaa4d3155b81d7383977d6854 \
	I1122 00:55:37.883007  700166 kubeadm.go:319] 	--control-plane 
	I1122 00:55:37.883017  700166 kubeadm.go:319] 
	I1122 00:55:37.883351  700166 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1122 00:55:37.883360  700166 kubeadm.go:319] 
	I1122 00:55:37.883681  700166 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token qp3zts.wnogt5v3he40n5uk \
	I1122 00:55:37.883973  700166 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ecfebb5fda4f065a571cf90106e71e452abce05aaa4d3155b81d7383977d6854 
	I1122 00:55:37.892070  700166 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1122 00:55:37.892443  700166 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1122 00:55:37.892574  700166 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1122 00:55:37.892587  700166 cni.go:84] Creating CNI manager for ""
	I1122 00:55:37.892594  700166 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:55:37.896257  700166 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1122 00:55:37.897180  703787 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501404856s
	I1122 00:55:37.901082  703787 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1122 00:55:37.901212  703787 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1122 00:55:37.901375  703787 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1122 00:55:37.901510  703787 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1122 00:55:37.900055  700166 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1122 00:55:37.906319  700166 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1122 00:55:37.906341  700166 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1122 00:55:37.927600  700166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1122 00:55:38.224839  700166 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1122 00:55:38.224982  700166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:55:38.225084  700166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-165130 minikube.k8s.io/updated_at=2025_11_22T00_55_38_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785 minikube.k8s.io/name=no-preload-165130 minikube.k8s.io/primary=true
	I1122 00:55:38.514952  700166 ops.go:34] apiserver oom_adj: -16
	I1122 00:55:38.515059  700166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:55:39.015169  700166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:55:39.515969  700166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:55:40.016069  700166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:55:40.515169  700166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:55:41.016048  700166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:55:41.515961  700166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:55:42.015202  700166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:55:42.515420  700166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:55:43.015863  700166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:55:43.256854  700166 kubeadm.go:1114] duration metric: took 5.03191583s to wait for elevateKubeSystemPrivileges
	I1122 00:55:43.256880  700166 kubeadm.go:403] duration metric: took 29.454617132s to StartCluster
	I1122 00:55:43.256896  700166 settings.go:142] acquiring lock: {Name:mk6c31eb57ec65b047b78b4e1046e03fe7cc77bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:55:43.256958  700166 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1122 00:55:43.257590  700166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/kubeconfig: {Name:mkcd094d8a765e5a81369c0fb33cb5e696b17bb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:55:43.257794  700166 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:55:43.257949  700166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1122 00:55:43.258191  700166 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1122 00:55:43.258259  700166 addons.go:70] Setting storage-provisioner=true in profile "no-preload-165130"
	I1122 00:55:43.258274  700166 addons.go:239] Setting addon storage-provisioner=true in "no-preload-165130"
	I1122 00:55:43.258300  700166 host.go:66] Checking if "no-preload-165130" exists ...
	I1122 00:55:43.258784  700166 cli_runner.go:164] Run: docker container inspect no-preload-165130 --format={{.State.Status}}
	I1122 00:55:43.259233  700166 config.go:182] Loaded profile config "no-preload-165130": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:55:43.259321  700166 addons.go:70] Setting default-storageclass=true in profile "no-preload-165130"
	I1122 00:55:43.259364  700166 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-165130"
	I1122 00:55:43.259675  700166 cli_runner.go:164] Run: docker container inspect no-preload-165130 --format={{.State.Status}}
	I1122 00:55:43.261169  700166 out.go:179] * Verifying Kubernetes components...
	I1122 00:55:43.267916  700166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:55:43.309519  700166 addons.go:239] Setting addon default-storageclass=true in "no-preload-165130"
	I1122 00:55:43.309560  700166 host.go:66] Checking if "no-preload-165130" exists ...
	I1122 00:55:43.310172  700166 cli_runner.go:164] Run: docker container inspect no-preload-165130 --format={{.State.Status}}
	I1122 00:55:43.320283  700166 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1122 00:55:43.324809  700166 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:55:43.324831  700166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1122 00:55:43.324900  700166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-165130
	I1122 00:55:43.339158  700166 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1122 00:55:43.339177  700166 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1122 00:55:43.339239  700166 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-165130
	I1122 00:55:43.370172  700166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33781 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/no-preload-165130/id_rsa Username:docker}
	I1122 00:55:43.390947  700166 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33781 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/no-preload-165130/id_rsa Username:docker}
	I1122 00:55:43.853781  700166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:55:43.947399  700166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1122 00:55:43.947591  700166 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:55:44.010839  700166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1122 00:55:45.931054  700166 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.983409692s)
	I1122 00:55:45.931797  700166 node_ready.go:35] waiting up to 6m0s for node "no-preload-165130" to be "Ready" ...
	I1122 00:55:45.932070  700166 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.984591721s)
	I1122 00:55:45.932085  700166 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1122 00:55:45.932914  700166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.921999748s)
	I1122 00:55:45.934491  700166 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.080623149s)
	I1122 00:55:46.040053  700166 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1122 00:55:42.738835  703787 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.837053842s
	I1122 00:55:46.043082  700166 addons.go:530] duration metric: took 2.784880874s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1122 00:55:46.438437  700166 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-165130" context rescaled to 1 replicas
	I1122 00:55:47.907511  703787 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 10.006087648s
	I1122 00:55:49.552686  703787 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 11.649573425s
	I1122 00:55:49.586900  703787 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1122 00:55:49.605172  703787 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1122 00:55:49.624145  703787 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1122 00:55:49.624736  703787 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-879000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1122 00:55:49.647063  703787 kubeadm.go:319] [bootstrap-token] Using token: da6k9z.dqqxu8dmw1am5pxg
	I1122 00:55:49.650201  703787 out.go:252]   - Configuring RBAC rules ...
	I1122 00:55:49.650333  703787 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1122 00:55:49.670698  703787 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1122 00:55:49.684942  703787 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1122 00:55:49.689340  703787 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1122 00:55:49.696696  703787 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1122 00:55:49.703919  703787 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1122 00:55:49.957794  703787 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1122 00:55:50.436273  703787 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1122 00:55:50.962146  703787 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1122 00:55:50.962933  703787 kubeadm.go:319] 
	I1122 00:55:50.963002  703787 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1122 00:55:50.963007  703787 kubeadm.go:319] 
	I1122 00:55:50.963083  703787 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1122 00:55:50.963088  703787 kubeadm.go:319] 
	I1122 00:55:50.963112  703787 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1122 00:55:50.963441  703787 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1122 00:55:50.963497  703787 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1122 00:55:50.963518  703787 kubeadm.go:319] 
	I1122 00:55:50.963572  703787 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1122 00:55:50.963576  703787 kubeadm.go:319] 
	I1122 00:55:50.963638  703787 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1122 00:55:50.963643  703787 kubeadm.go:319] 
	I1122 00:55:50.963694  703787 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1122 00:55:50.963769  703787 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1122 00:55:50.963837  703787 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1122 00:55:50.963841  703787 kubeadm.go:319] 
	I1122 00:55:50.963924  703787 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1122 00:55:50.964001  703787 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1122 00:55:50.964004  703787 kubeadm.go:319] 
	I1122 00:55:50.964088  703787 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token da6k9z.dqqxu8dmw1am5pxg \
	I1122 00:55:50.964191  703787 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ecfebb5fda4f065a571cf90106e71e452abce05aaa4d3155b81d7383977d6854 \
	I1122 00:55:50.964211  703787 kubeadm.go:319] 	--control-plane 
	I1122 00:55:50.964214  703787 kubeadm.go:319] 
	I1122 00:55:50.964299  703787 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1122 00:55:50.964302  703787 kubeadm.go:319] 
	I1122 00:55:50.964677  703787 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token da6k9z.dqqxu8dmw1am5pxg \
	I1122 00:55:50.964798  703787 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ecfebb5fda4f065a571cf90106e71e452abce05aaa4d3155b81d7383977d6854 
	I1122 00:55:50.969509  703787 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1122 00:55:50.969863  703787 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1122 00:55:50.970004  703787 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1122 00:55:50.970031  703787 cni.go:84] Creating CNI manager for ""
	I1122 00:55:50.970038  703787 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:55:50.973162  703787 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1122 00:55:50.976112  703787 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1122 00:55:50.980192  703787 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1122 00:55:50.980216  703787 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1122 00:55:50.995100  703787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1122 00:55:51.457575  703787 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1122 00:55:51.457705  703787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:55:51.457778  703787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-879000 minikube.k8s.io/updated_at=2025_11_22T00_55_51_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785 minikube.k8s.io/name=embed-certs-879000 minikube.k8s.io/primary=true
	I1122 00:55:51.702726  703787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:55:51.702841  703787 ops.go:34] apiserver oom_adj: -16
	I1122 00:55:52.202845  703787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1122 00:55:47.937162  700166 node_ready.go:57] node "no-preload-165130" has "Ready":"False" status (will retry)
	W1122 00:55:50.434522  700166 node_ready.go:57] node "no-preload-165130" has "Ready":"False" status (will retry)
	W1122 00:55:52.434698  700166 node_ready.go:57] node "no-preload-165130" has "Ready":"False" status (will retry)
	I1122 00:55:52.703245  703787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:55:53.202846  703787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:55:53.702880  703787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:55:54.202987  703787 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:55:54.335399  703787 kubeadm.go:1114] duration metric: took 2.877731403s to wait for elevateKubeSystemPrivileges
	I1122 00:55:54.335429  703787 kubeadm.go:403] duration metric: took 26.101732207s to StartCluster
	I1122 00:55:54.335447  703787 settings.go:142] acquiring lock: {Name:mk6c31eb57ec65b047b78b4e1046e03fe7cc77bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:55:54.335513  703787 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1122 00:55:54.336849  703787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/kubeconfig: {Name:mkcd094d8a765e5a81369c0fb33cb5e696b17bb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:55:54.337078  703787 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:55:54.337190  703787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1122 00:55:54.337453  703787 config.go:182] Loaded profile config "embed-certs-879000": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:55:54.337568  703787 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1122 00:55:54.337632  703787 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-879000"
	I1122 00:55:54.337651  703787 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-879000"
	I1122 00:55:54.337684  703787 host.go:66] Checking if "embed-certs-879000" exists ...
	I1122 00:55:54.338405  703787 cli_runner.go:164] Run: docker container inspect embed-certs-879000 --format={{.State.Status}}
	I1122 00:55:54.338825  703787 addons.go:70] Setting default-storageclass=true in profile "embed-certs-879000"
	I1122 00:55:54.338847  703787 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-879000"
	I1122 00:55:54.339161  703787 cli_runner.go:164] Run: docker container inspect embed-certs-879000 --format={{.State.Status}}
	I1122 00:55:54.341138  703787 out.go:179] * Verifying Kubernetes components...
	I1122 00:55:54.344078  703787 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:55:54.392683  703787 addons.go:239] Setting addon default-storageclass=true in "embed-certs-879000"
	I1122 00:55:54.392725  703787 host.go:66] Checking if "embed-certs-879000" exists ...
	I1122 00:55:54.393072  703787 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1122 00:55:54.393164  703787 cli_runner.go:164] Run: docker container inspect embed-certs-879000 --format={{.State.Status}}
	I1122 00:55:54.395879  703787 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:55:54.395912  703787 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1122 00:55:54.395969  703787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879000
	I1122 00:55:54.433083  703787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33787 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/embed-certs-879000/id_rsa Username:docker}
	I1122 00:55:54.434568  703787 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1122 00:55:54.434585  703787 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1122 00:55:54.434654  703787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879000
	I1122 00:55:54.463395  703787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33787 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/embed-certs-879000/id_rsa Username:docker}
	I1122 00:55:54.724078  703787 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1122 00:55:54.724184  703787 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:55:54.744331  703787 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:55:54.800204  703787 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1122 00:55:55.447642  703787 node_ready.go:35] waiting up to 6m0s for node "embed-certs-879000" to be "Ready" ...
	I1122 00:55:55.447993  703787 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1122 00:55:55.763279  703787 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.018907337s)
	I1122 00:55:55.774023  703787 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1122 00:55:55.776770  703787 addons.go:530] duration metric: took 1.439195439s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1122 00:55:55.952177  703787 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-879000" context rescaled to 1 replicas
	W1122 00:55:57.451312  703787 node_ready.go:57] node "embed-certs-879000" has "Ready":"False" status (will retry)
	W1122 00:55:54.436517  700166 node_ready.go:57] node "no-preload-165130" has "Ready":"False" status (will retry)
	W1122 00:55:56.934991  700166 node_ready.go:57] node "no-preload-165130" has "Ready":"False" status (will retry)
	W1122 00:55:59.950950  703787 node_ready.go:57] node "embed-certs-879000" has "Ready":"False" status (will retry)
	W1122 00:56:02.450940  703787 node_ready.go:57] node "embed-certs-879000" has "Ready":"False" status (will retry)
	W1122 00:55:59.434516  700166 node_ready.go:57] node "no-preload-165130" has "Ready":"False" status (will retry)
	I1122 00:55:59.934738  700166 node_ready.go:49] node "no-preload-165130" is "Ready"
	I1122 00:55:59.934768  700166 node_ready.go:38] duration metric: took 14.002954573s for node "no-preload-165130" to be "Ready" ...
	I1122 00:55:59.934782  700166 api_server.go:52] waiting for apiserver process to appear ...
	I1122 00:55:59.934846  700166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:55:59.959170  700166 api_server.go:72] duration metric: took 16.701325775s to wait for apiserver process to appear ...
	I1122 00:55:59.959195  700166 api_server.go:88] waiting for apiserver healthz status ...
	I1122 00:55:59.959216  700166 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1122 00:55:59.967213  700166 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1122 00:55:59.968287  700166 api_server.go:141] control plane version: v1.34.1
	I1122 00:55:59.968310  700166 api_server.go:131] duration metric: took 9.107861ms to wait for apiserver health ...
	I1122 00:55:59.968319  700166 system_pods.go:43] waiting for kube-system pods to appear ...
	I1122 00:55:59.971235  700166 system_pods.go:59] 8 kube-system pods found
	I1122 00:55:59.971335  700166 system_pods.go:61] "coredns-66bc5c9577-pt27w" [54abb602-6f61-4692-a49d-c67637de05aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:55:59.971350  700166 system_pods.go:61] "etcd-no-preload-165130" [9924c059-bb41-4a3c-87f4-5bbf226dc98f] Running
	I1122 00:55:59.971357  700166 system_pods.go:61] "kindnet-2kqbq" [431f8066-47ae-445f-ba11-89e3d9b34f04] Running
	I1122 00:55:59.971363  700166 system_pods.go:61] "kube-apiserver-no-preload-165130" [aaa8100d-a0ba-46d9-975c-7400a36bcc5f] Running
	I1122 00:55:59.971368  700166 system_pods.go:61] "kube-controller-manager-no-preload-165130" [597218b5-9e1c-43ed-8c13-3560d3b80422] Running
	I1122 00:55:59.971372  700166 system_pods.go:61] "kube-proxy-kr4ll" [b7ff7069-d8ba-4340-b2a8-57db9eb94b57] Running
	I1122 00:55:59.971397  700166 system_pods.go:61] "kube-scheduler-no-preload-165130" [6dc003f2-6224-4216-8657-71ebabad3744] Running
	I1122 00:55:59.971403  700166 system_pods.go:61] "storage-provisioner" [3cb5ecac-491c-4635-85b4-a7e2719d7aec] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:55:59.971414  700166 system_pods.go:74] duration metric: took 3.088427ms to wait for pod list to return data ...
	I1122 00:55:59.971423  700166 default_sa.go:34] waiting for default service account to be created ...
	I1122 00:55:59.973694  700166 default_sa.go:45] found service account: "default"
	I1122 00:55:59.973719  700166 default_sa.go:55] duration metric: took 2.285316ms for default service account to be created ...
	I1122 00:55:59.973729  700166 system_pods.go:116] waiting for k8s-apps to be running ...
	I1122 00:55:59.976772  700166 system_pods.go:86] 8 kube-system pods found
	I1122 00:55:59.976806  700166 system_pods.go:89] "coredns-66bc5c9577-pt27w" [54abb602-6f61-4692-a49d-c67637de05aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:55:59.976813  700166 system_pods.go:89] "etcd-no-preload-165130" [9924c059-bb41-4a3c-87f4-5bbf226dc98f] Running
	I1122 00:55:59.976819  700166 system_pods.go:89] "kindnet-2kqbq" [431f8066-47ae-445f-ba11-89e3d9b34f04] Running
	I1122 00:55:59.976823  700166 system_pods.go:89] "kube-apiserver-no-preload-165130" [aaa8100d-a0ba-46d9-975c-7400a36bcc5f] Running
	I1122 00:55:59.976831  700166 system_pods.go:89] "kube-controller-manager-no-preload-165130" [597218b5-9e1c-43ed-8c13-3560d3b80422] Running
	I1122 00:55:59.976834  700166 system_pods.go:89] "kube-proxy-kr4ll" [b7ff7069-d8ba-4340-b2a8-57db9eb94b57] Running
	I1122 00:55:59.976839  700166 system_pods.go:89] "kube-scheduler-no-preload-165130" [6dc003f2-6224-4216-8657-71ebabad3744] Running
	I1122 00:55:59.976847  700166 system_pods.go:89] "storage-provisioner" [3cb5ecac-491c-4635-85b4-a7e2719d7aec] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:55:59.976870  700166 retry.go:31] will retry after 250.616601ms: missing components: kube-dns
	I1122 00:56:00.251015  700166 system_pods.go:86] 8 kube-system pods found
	I1122 00:56:00.251080  700166 system_pods.go:89] "coredns-66bc5c9577-pt27w" [54abb602-6f61-4692-a49d-c67637de05aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:56:00.251088  700166 system_pods.go:89] "etcd-no-preload-165130" [9924c059-bb41-4a3c-87f4-5bbf226dc98f] Running
	I1122 00:56:00.251095  700166 system_pods.go:89] "kindnet-2kqbq" [431f8066-47ae-445f-ba11-89e3d9b34f04] Running
	I1122 00:56:00.251100  700166 system_pods.go:89] "kube-apiserver-no-preload-165130" [aaa8100d-a0ba-46d9-975c-7400a36bcc5f] Running
	I1122 00:56:00.251118  700166 system_pods.go:89] "kube-controller-manager-no-preload-165130" [597218b5-9e1c-43ed-8c13-3560d3b80422] Running
	I1122 00:56:00.251123  700166 system_pods.go:89] "kube-proxy-kr4ll" [b7ff7069-d8ba-4340-b2a8-57db9eb94b57] Running
	I1122 00:56:00.251127  700166 system_pods.go:89] "kube-scheduler-no-preload-165130" [6dc003f2-6224-4216-8657-71ebabad3744] Running
	I1122 00:56:00.251133  700166 system_pods.go:89] "storage-provisioner" [3cb5ecac-491c-4635-85b4-a7e2719d7aec] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:56:00.251150  700166 retry.go:31] will retry after 349.63447ms: missing components: kube-dns
	I1122 00:56:00.605279  700166 system_pods.go:86] 8 kube-system pods found
	I1122 00:56:00.605325  700166 system_pods.go:89] "coredns-66bc5c9577-pt27w" [54abb602-6f61-4692-a49d-c67637de05aa] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:56:00.605333  700166 system_pods.go:89] "etcd-no-preload-165130" [9924c059-bb41-4a3c-87f4-5bbf226dc98f] Running
	I1122 00:56:00.605339  700166 system_pods.go:89] "kindnet-2kqbq" [431f8066-47ae-445f-ba11-89e3d9b34f04] Running
	I1122 00:56:00.605344  700166 system_pods.go:89] "kube-apiserver-no-preload-165130" [aaa8100d-a0ba-46d9-975c-7400a36bcc5f] Running
	I1122 00:56:00.605349  700166 system_pods.go:89] "kube-controller-manager-no-preload-165130" [597218b5-9e1c-43ed-8c13-3560d3b80422] Running
	I1122 00:56:00.605354  700166 system_pods.go:89] "kube-proxy-kr4ll" [b7ff7069-d8ba-4340-b2a8-57db9eb94b57] Running
	I1122 00:56:00.605359  700166 system_pods.go:89] "kube-scheduler-no-preload-165130" [6dc003f2-6224-4216-8657-71ebabad3744] Running
	I1122 00:56:00.605365  700166 system_pods.go:89] "storage-provisioner" [3cb5ecac-491c-4635-85b4-a7e2719d7aec] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:56:00.605382  700166 retry.go:31] will retry after 402.521097ms: missing components: kube-dns
	I1122 00:56:01.012766  700166 system_pods.go:86] 8 kube-system pods found
	I1122 00:56:01.012803  700166 system_pods.go:89] "coredns-66bc5c9577-pt27w" [54abb602-6f61-4692-a49d-c67637de05aa] Running
	I1122 00:56:01.012812  700166 system_pods.go:89] "etcd-no-preload-165130" [9924c059-bb41-4a3c-87f4-5bbf226dc98f] Running
	I1122 00:56:01.012817  700166 system_pods.go:89] "kindnet-2kqbq" [431f8066-47ae-445f-ba11-89e3d9b34f04] Running
	I1122 00:56:01.012821  700166 system_pods.go:89] "kube-apiserver-no-preload-165130" [aaa8100d-a0ba-46d9-975c-7400a36bcc5f] Running
	I1122 00:56:01.012826  700166 system_pods.go:89] "kube-controller-manager-no-preload-165130" [597218b5-9e1c-43ed-8c13-3560d3b80422] Running
	I1122 00:56:01.012831  700166 system_pods.go:89] "kube-proxy-kr4ll" [b7ff7069-d8ba-4340-b2a8-57db9eb94b57] Running
	I1122 00:56:01.012835  700166 system_pods.go:89] "kube-scheduler-no-preload-165130" [6dc003f2-6224-4216-8657-71ebabad3744] Running
	I1122 00:56:01.012839  700166 system_pods.go:89] "storage-provisioner" [3cb5ecac-491c-4635-85b4-a7e2719d7aec] Running
	I1122 00:56:01.012847  700166 system_pods.go:126] duration metric: took 1.039112679s to wait for k8s-apps to be running ...
	I1122 00:56:01.012855  700166 system_svc.go:44] waiting for kubelet service to be running ....
	I1122 00:56:01.012914  700166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:56:01.029978  700166 system_svc.go:56] duration metric: took 17.111212ms WaitForService to wait for kubelet
	I1122 00:56:01.030004  700166 kubeadm.go:587] duration metric: took 17.772165451s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:56:01.030022  700166 node_conditions.go:102] verifying NodePressure condition ...
	I1122 00:56:01.032999  700166 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1122 00:56:01.033030  700166 node_conditions.go:123] node cpu capacity is 2
	I1122 00:56:01.033044  700166 node_conditions.go:105] duration metric: took 3.017192ms to run NodePressure ...
	I1122 00:56:01.033057  700166 start.go:242] waiting for startup goroutines ...
	I1122 00:56:01.033065  700166 start.go:247] waiting for cluster config update ...
	I1122 00:56:01.033077  700166 start.go:256] writing updated cluster config ...
	I1122 00:56:01.033360  700166 ssh_runner.go:195] Run: rm -f paused
	I1122 00:56:01.037462  700166 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:56:01.041247  700166 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pt27w" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:56:01.045545  700166 pod_ready.go:94] pod "coredns-66bc5c9577-pt27w" is "Ready"
	I1122 00:56:01.045580  700166 pod_ready.go:86] duration metric: took 4.300781ms for pod "coredns-66bc5c9577-pt27w" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:56:01.048133  700166 pod_ready.go:83] waiting for pod "etcd-no-preload-165130" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:56:01.052741  700166 pod_ready.go:94] pod "etcd-no-preload-165130" is "Ready"
	I1122 00:56:01.052768  700166 pod_ready.go:86] duration metric: took 4.608473ms for pod "etcd-no-preload-165130" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:56:01.054935  700166 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-165130" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:56:01.059273  700166 pod_ready.go:94] pod "kube-apiserver-no-preload-165130" is "Ready"
	I1122 00:56:01.059300  700166 pod_ready.go:86] duration metric: took 4.341633ms for pod "kube-apiserver-no-preload-165130" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:56:01.061715  700166 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-165130" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:56:01.443316  700166 pod_ready.go:94] pod "kube-controller-manager-no-preload-165130" is "Ready"
	I1122 00:56:01.443350  700166 pod_ready.go:86] duration metric: took 381.610392ms for pod "kube-controller-manager-no-preload-165130" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:56:01.643852  700166 pod_ready.go:83] waiting for pod "kube-proxy-kr4ll" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:56:02.041858  700166 pod_ready.go:94] pod "kube-proxy-kr4ll" is "Ready"
	I1122 00:56:02.041892  700166 pod_ready.go:86] duration metric: took 397.938251ms for pod "kube-proxy-kr4ll" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:56:02.242383  700166 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-165130" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:56:02.643084  700166 pod_ready.go:94] pod "kube-scheduler-no-preload-165130" is "Ready"
	I1122 00:56:02.643115  700166 pod_ready.go:86] duration metric: took 400.704898ms for pod "kube-scheduler-no-preload-165130" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:56:02.643134  700166 pod_ready.go:40] duration metric: took 1.60563929s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:56:02.701427  700166 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1122 00:56:02.704548  700166 out.go:179] * Done! kubectl is now configured to use "no-preload-165130" cluster and "default" namespace by default
	W1122 00:56:04.950808  703787 node_ready.go:57] node "embed-certs-879000" has "Ready":"False" status (will retry)
	W1122 00:56:06.951080  703787 node_ready.go:57] node "embed-certs-879000" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Nov 22 00:56:00 no-preload-165130 crio[834]: time="2025-11-22T00:56:00.452506407Z" level=info msg="Created container f9c3c3123265372ec7546505b2099cb4b44293284d70111120f985f62189fad8: kube-system/coredns-66bc5c9577-pt27w/coredns" id=f009aa18-1f9c-4386-ab16-1d5fa6c482c7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:56:00 no-preload-165130 crio[834]: time="2025-11-22T00:56:00.455488654Z" level=info msg="Starting container: f9c3c3123265372ec7546505b2099cb4b44293284d70111120f985f62189fad8" id=11abdf6c-fbae-4dd8-bf73-6378df426cd5 name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:56:00 no-preload-165130 crio[834]: time="2025-11-22T00:56:00.48494531Z" level=info msg="Started container" PID=2519 containerID=f9c3c3123265372ec7546505b2099cb4b44293284d70111120f985f62189fad8 description=kube-system/coredns-66bc5c9577-pt27w/coredns id=11abdf6c-fbae-4dd8-bf73-6378df426cd5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=715498dd4e7742b16cd3f73ec3bc4ab1815b49daebe532ed50775132fc6d7ac4
	Nov 22 00:56:03 no-preload-165130 crio[834]: time="2025-11-22T00:56:03.231093204Z" level=info msg="Running pod sandbox: default/busybox/POD" id=815a9ef6-7375-4f59-833e-98c5da710fc9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 22 00:56:03 no-preload-165130 crio[834]: time="2025-11-22T00:56:03.231173275Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:56:03 no-preload-165130 crio[834]: time="2025-11-22T00:56:03.236648545Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:9dadcde1a7526afc6af1d44d31a1dbd7ca5503122a7c01bb453eea2f19a4a304 UID:e6b0f65d-f761-4d8a-b568-8eb439d4ec02 NetNS:/var/run/netns/2367e043-8a7b-4e2e-bd7c-1db22c828436 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001460718}] Aliases:map[]}"
	Nov 22 00:56:03 no-preload-165130 crio[834]: time="2025-11-22T00:56:03.237291726Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 22 00:56:03 no-preload-165130 crio[834]: time="2025-11-22T00:56:03.247667422Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:9dadcde1a7526afc6af1d44d31a1dbd7ca5503122a7c01bb453eea2f19a4a304 UID:e6b0f65d-f761-4d8a-b568-8eb439d4ec02 NetNS:/var/run/netns/2367e043-8a7b-4e2e-bd7c-1db22c828436 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001460718}] Aliases:map[]}"
	Nov 22 00:56:03 no-preload-165130 crio[834]: time="2025-11-22T00:56:03.247818335Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 22 00:56:03 no-preload-165130 crio[834]: time="2025-11-22T00:56:03.250469925Z" level=info msg="Ran pod sandbox 9dadcde1a7526afc6af1d44d31a1dbd7ca5503122a7c01bb453eea2f19a4a304 with infra container: default/busybox/POD" id=815a9ef6-7375-4f59-833e-98c5da710fc9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 22 00:56:03 no-preload-165130 crio[834]: time="2025-11-22T00:56:03.252886256Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7e001088-5eda-47ce-8a7c-2e8960f41a3a name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:56:03 no-preload-165130 crio[834]: time="2025-11-22T00:56:03.253202489Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=7e001088-5eda-47ce-8a7c-2e8960f41a3a name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:56:03 no-preload-165130 crio[834]: time="2025-11-22T00:56:03.253358439Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=7e001088-5eda-47ce-8a7c-2e8960f41a3a name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:56:03 no-preload-165130 crio[834]: time="2025-11-22T00:56:03.25436915Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5e086469-fa34-495e-8b95-5a48e6d9e331 name=/runtime.v1.ImageService/PullImage
	Nov 22 00:56:03 no-preload-165130 crio[834]: time="2025-11-22T00:56:03.256723075Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 22 00:56:05 no-preload-165130 crio[834]: time="2025-11-22T00:56:05.29114426Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=5e086469-fa34-495e-8b95-5a48e6d9e331 name=/runtime.v1.ImageService/PullImage
	Nov 22 00:56:05 no-preload-165130 crio[834]: time="2025-11-22T00:56:05.291798796Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=809a6de3-a2a6-4ae4-a39d-e2a964c3bd67 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:56:05 no-preload-165130 crio[834]: time="2025-11-22T00:56:05.29314919Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8f19c8c7-76d9-40b4-8713-5fd6585200d6 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:56:05 no-preload-165130 crio[834]: time="2025-11-22T00:56:05.298602888Z" level=info msg="Creating container: default/busybox/busybox" id=9e6b0656-21cc-496e-a24e-ae85369ac154 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:56:05 no-preload-165130 crio[834]: time="2025-11-22T00:56:05.298745169Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:56:05 no-preload-165130 crio[834]: time="2025-11-22T00:56:05.30423785Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:56:05 no-preload-165130 crio[834]: time="2025-11-22T00:56:05.304868313Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:56:05 no-preload-165130 crio[834]: time="2025-11-22T00:56:05.321308595Z" level=info msg="Created container 5d52de897af24a337f31466558acaa9134a5583199888cb5ca53bddc6e6c1afe: default/busybox/busybox" id=9e6b0656-21cc-496e-a24e-ae85369ac154 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:56:05 no-preload-165130 crio[834]: time="2025-11-22T00:56:05.322197521Z" level=info msg="Starting container: 5d52de897af24a337f31466558acaa9134a5583199888cb5ca53bddc6e6c1afe" id=3d6e9d2a-256a-4b27-afe7-55086bbf76c1 name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:56:05 no-preload-165130 crio[834]: time="2025-11-22T00:56:05.32557791Z" level=info msg="Started container" PID=2570 containerID=5d52de897af24a337f31466558acaa9134a5583199888cb5ca53bddc6e6c1afe description=default/busybox/busybox id=3d6e9d2a-256a-4b27-afe7-55086bbf76c1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9dadcde1a7526afc6af1d44d31a1dbd7ca5503122a7c01bb453eea2f19a4a304
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	5d52de897af24       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago       Running             busybox                   0                   9dadcde1a7526       busybox                                     default
	f9c3c31232653       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 seconds ago      Running             coredns                   0                   715498dd4e774       coredns-66bc5c9577-pt27w                    kube-system
	54b1534f6e5fc       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      12 seconds ago      Running             storage-provisioner       0                   64eedf063da3f       storage-provisioner                         kube-system
	e51a18d219663       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    23 seconds ago      Running             kindnet-cni               0                   91e6a9d8b0e81       kindnet-2kqbq                               kube-system
	9fb7958848a41       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      28 seconds ago      Running             kube-proxy                0                   51122fafb169f       kube-proxy-kr4ll                            kube-system
	1bc125b68d262       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      45 seconds ago      Running             kube-scheduler            0                   49bbc126470be       kube-scheduler-no-preload-165130            kube-system
	5685e3f6847a5       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      45 seconds ago      Running             kube-controller-manager   0                   5f639a1097f1c       kube-controller-manager-no-preload-165130   kube-system
	a26473818e6d1       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      45 seconds ago      Running             etcd                      0                   ce2f090d6d6f5       etcd-no-preload-165130                      kube-system
	a5e68dcd783fe       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      45 seconds ago      Running             kube-apiserver            0                   1c8167ce7dd66       kube-apiserver-no-preload-165130            kube-system
	
	
	==> coredns [f9c3c3123265372ec7546505b2099cb4b44293284d70111120f985f62189fad8] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41373 - 2044 "HINFO IN 5377604585252200411.7885071896162613558. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.02231659s
	
	
	==> describe nodes <==
	Name:               no-preload-165130
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-165130
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=no-preload-165130
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T00_55_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:55:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-165130
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:56:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:56:08 +0000   Sat, 22 Nov 2025 00:55:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:56:08 +0000   Sat, 22 Nov 2025 00:55:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:56:08 +0000   Sat, 22 Nov 2025 00:55:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 22 Nov 2025 00:56:08 +0000   Sat, 22 Nov 2025 00:55:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-165130
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c92eea4d3c03c0156e01fcf8691e3907
	  System UUID:                194834cc-9098-4e11-a16d-906d0fa2db99
	  Boot ID:                    72ac7385-472f-47d1-a23e-bd80468d6e09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-pt27w                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     29s
	  kube-system                 etcd-no-preload-165130                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         35s
	  kube-system                 kindnet-2kqbq                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-no-preload-165130             250m (12%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-no-preload-165130    200m (10%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-kr4ll                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-no-preload-165130             100m (5%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 27s                kube-proxy       
	  Normal   NodeHasSufficientMemory  46s (x8 over 46s)  kubelet          Node no-preload-165130 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    46s (x8 over 46s)  kubelet          Node no-preload-165130 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     46s (x8 over 46s)  kubelet          Node no-preload-165130 status is now: NodeHasSufficientPID
	  Normal   Starting                 35s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 35s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  35s                kubelet          Node no-preload-165130 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    35s                kubelet          Node no-preload-165130 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     35s                kubelet          Node no-preload-165130 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           30s                node-controller  Node no-preload-165130 event: Registered Node no-preload-165130 in Controller
	  Normal   NodeReady                13s                kubelet          Node no-preload-165130 status is now: NodeReady
	
	
	==> dmesg <==
	[ +30.712010] overlayfs: idmapped layers are currently not supported
	[Nov22 00:32] overlayfs: idmapped layers are currently not supported
	[Nov22 00:33] overlayfs: idmapped layers are currently not supported
	[Nov22 00:35] overlayfs: idmapped layers are currently not supported
	[Nov22 00:36] overlayfs: idmapped layers are currently not supported
	[ +18.168104] overlayfs: idmapped layers are currently not supported
	[Nov22 00:37] overlayfs: idmapped layers are currently not supported
	[ +56.322609] overlayfs: idmapped layers are currently not supported
	[Nov22 00:38] overlayfs: idmapped layers are currently not supported
	[Nov22 00:39] overlayfs: idmapped layers are currently not supported
	[ +23.174928] overlayfs: idmapped layers are currently not supported
	[Nov22 00:41] overlayfs: idmapped layers are currently not supported
	[Nov22 00:42] overlayfs: idmapped layers are currently not supported
	[Nov22 00:44] overlayfs: idmapped layers are currently not supported
	[Nov22 00:45] overlayfs: idmapped layers are currently not supported
	[Nov22 00:46] overlayfs: idmapped layers are currently not supported
	[Nov22 00:48] overlayfs: idmapped layers are currently not supported
	[Nov22 00:50] overlayfs: idmapped layers are currently not supported
	[Nov22 00:51] overlayfs: idmapped layers are currently not supported
	[ +11.900293] overlayfs: idmapped layers are currently not supported
	[ +28.922055] overlayfs: idmapped layers are currently not supported
	[Nov22 00:52] overlayfs: idmapped layers are currently not supported
	[Nov22 00:53] overlayfs: idmapped layers are currently not supported
	[Nov22 00:54] overlayfs: idmapped layers are currently not supported
	[Nov22 00:55] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [a26473818e6d13cb55bf8c5098ac46f3ec6db26135a5cb49291474377f3b2683] <==
	{"level":"warn","ts":"2025-11-22T00:55:31.283513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:55:31.322903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:55:31.396945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:55:31.478513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:55:31.521844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:55:31.570118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:55:31.614915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:55:31.654837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:55:31.697232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:55:31.756383Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:55:31.851142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:55:31.856628Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:55:31.951796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:55:32.004088Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:55:32.052224Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:55:32.130230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:55:32.182750Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:55:32.214794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:55:32.269533Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:55:32.302409Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:55:32.373883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:55:32.430049Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:55:32.457485Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:55:32.498414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:55:32.712813Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34682","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 00:56:12 up  5:38,  0 user,  load average: 5.63, 4.16, 2.91
	Linux no-preload-165130 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e51a18d219663175196e20badec8915c43571b8ff5f334542553614b74c56d40] <==
	I1122 00:55:49.313952       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1122 00:55:49.314335       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1122 00:55:49.314470       1 main.go:148] setting mtu 1500 for CNI 
	I1122 00:55:49.314490       1 main.go:178] kindnetd IP family: "ipv4"
	I1122 00:55:49.314504       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-22T00:55:49Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1122 00:55:49.514449       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1122 00:55:49.514530       1 controller.go:381] "Waiting for informer caches to sync"
	I1122 00:55:49.514564       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1122 00:55:49.515319       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1122 00:55:49.714716       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1122 00:55:49.714740       1 metrics.go:72] Registering metrics
	I1122 00:55:49.714800       1 controller.go:711] "Syncing nftables rules"
	I1122 00:55:59.521955       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:55:59.521998       1 main.go:301] handling current node
	I1122 00:56:09.514963       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:56:09.514996       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a5e68dcd783fed963f13d557f6dd59450c591883f09e63636f04b2270fe359e6] <==
	I1122 00:55:34.487533       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1122 00:55:34.530999       1 controller.go:667] quota admission added evaluator for: namespaces
	I1122 00:55:34.535362       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:55:34.536005       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	E1122 00:55:34.543787       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1122 00:55:34.588798       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1122 00:55:34.606435       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:55:34.608184       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1122 00:55:35.137119       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1122 00:55:35.148539       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1122 00:55:35.148573       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1122 00:55:36.165056       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1122 00:55:36.226830       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1122 00:55:36.358998       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1122 00:55:36.367295       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1122 00:55:36.368613       1 controller.go:667] quota admission added evaluator for: endpoints
	I1122 00:55:36.374293       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1122 00:55:37.199669       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1122 00:55:37.316611       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1122 00:55:37.355453       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1122 00:55:37.425471       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1122 00:55:42.508736       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:55:42.567073       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:55:42.986544       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1122 00:55:43.257661       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [5685e3f6847a5d3dd3a466ba51859f21810bfacf4d1650d243f84d0600d38d08] <==
	I1122 00:55:42.371147       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1122 00:55:42.381392       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1122 00:55:42.381486       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1122 00:55:42.381528       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1122 00:55:42.381563       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1122 00:55:42.372726       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1122 00:55:42.332037       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:55:42.332174       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1122 00:55:42.382421       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1122 00:55:42.382550       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-165130"
	I1122 00:55:42.382625       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1122 00:55:42.398774       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1122 00:55:42.399561       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1122 00:55:42.399658       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1122 00:55:42.399723       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1122 00:55:42.406091       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:55:42.406295       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1122 00:55:42.414086       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1122 00:55:42.436800       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1122 00:55:42.503072       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-165130" podCIDRs=["10.244.0.0/24"]
	I1122 00:55:42.589925       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:55:42.653894       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:55:42.653983       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1122 00:55:42.654013       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1122 00:56:02.384975       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [9fb7958848a41d9bd28f8679ab8807238594882201721fc1090297f89bc9743b] <==
	I1122 00:55:44.416420       1 server_linux.go:53] "Using iptables proxy"
	I1122 00:55:44.606841       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1122 00:55:44.709860       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1122 00:55:44.709915       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1122 00:55:44.709987       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1122 00:55:44.845927       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1122 00:55:44.845985       1 server_linux.go:132] "Using iptables Proxier"
	I1122 00:55:44.870774       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1122 00:55:44.871096       1 server.go:527] "Version info" version="v1.34.1"
	I1122 00:55:44.871111       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:55:44.872500       1 config.go:200] "Starting service config controller"
	I1122 00:55:44.872508       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1122 00:55:44.872531       1 config.go:106] "Starting endpoint slice config controller"
	I1122 00:55:44.872535       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1122 00:55:44.872554       1 config.go:403] "Starting serviceCIDR config controller"
	I1122 00:55:44.872559       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1122 00:55:44.886771       1 config.go:309] "Starting node config controller"
	I1122 00:55:44.886790       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1122 00:55:44.886798       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1122 00:55:44.975540       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1122 00:55:44.975617       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1122 00:55:44.975857       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [1bc125b68d262d5c90f3323cafef02e2651d87d438b03d38eaf109b0c843160e] <==
	I1122 00:55:35.376426       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:55:35.382456       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1122 00:55:35.382667       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:55:35.383082       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:55:35.382689       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1122 00:55:35.398444       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1122 00:55:35.398511       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1122 00:55:35.408978       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1122 00:55:35.409040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1122 00:55:35.409109       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1122 00:55:35.409150       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1122 00:55:35.409183       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1122 00:55:35.409274       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1122 00:55:35.409308       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1122 00:55:35.409391       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1122 00:55:35.409425       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1122 00:55:35.409458       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1122 00:55:35.409506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1122 00:55:35.414265       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1122 00:55:35.414353       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1122 00:55:35.414462       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1122 00:55:35.414501       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1122 00:55:35.414540       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1122 00:55:35.417276       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	I1122 00:55:36.783916       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 22 00:55:38 no-preload-165130 kubelet[2018]: I1122 00:55:38.716193    2018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-165130" podStartSLOduration=1.716173806 podStartE2EDuration="1.716173806s" podCreationTimestamp="2025-11-22 00:55:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:55:38.714284557 +0000 UTC m=+1.529685736" watchObservedRunningTime="2025-11-22 00:55:38.716173806 +0000 UTC m=+1.531574977"
	Nov 22 00:55:38 no-preload-165130 kubelet[2018]: I1122 00:55:38.757533    2018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-165130" podStartSLOduration=1.7573196370000002 podStartE2EDuration="1.757319637s" podCreationTimestamp="2025-11-22 00:55:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:55:38.737392344 +0000 UTC m=+1.552793531" watchObservedRunningTime="2025-11-22 00:55:38.757319637 +0000 UTC m=+1.572720808"
	Nov 22 00:55:42 no-preload-165130 kubelet[2018]: I1122 00:55:42.564406    2018 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 22 00:55:42 no-preload-165130 kubelet[2018]: I1122 00:55:42.565489    2018 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 22 00:55:43 no-preload-165130 kubelet[2018]: I1122 00:55:43.519019    2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/431f8066-47ae-445f-ba11-89e3d9b34f04-cni-cfg\") pod \"kindnet-2kqbq\" (UID: \"431f8066-47ae-445f-ba11-89e3d9b34f04\") " pod="kube-system/kindnet-2kqbq"
	Nov 22 00:55:43 no-preload-165130 kubelet[2018]: I1122 00:55:43.519064    2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/431f8066-47ae-445f-ba11-89e3d9b34f04-lib-modules\") pod \"kindnet-2kqbq\" (UID: \"431f8066-47ae-445f-ba11-89e3d9b34f04\") " pod="kube-system/kindnet-2kqbq"
	Nov 22 00:55:43 no-preload-165130 kubelet[2018]: I1122 00:55:43.519084    2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b7ff7069-d8ba-4340-b2a8-57db9eb94b57-xtables-lock\") pod \"kube-proxy-kr4ll\" (UID: \"b7ff7069-d8ba-4340-b2a8-57db9eb94b57\") " pod="kube-system/kube-proxy-kr4ll"
	Nov 22 00:55:43 no-preload-165130 kubelet[2018]: I1122 00:55:43.519161    2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b7ff7069-d8ba-4340-b2a8-57db9eb94b57-lib-modules\") pod \"kube-proxy-kr4ll\" (UID: \"b7ff7069-d8ba-4340-b2a8-57db9eb94b57\") " pod="kube-system/kube-proxy-kr4ll"
	Nov 22 00:55:43 no-preload-165130 kubelet[2018]: I1122 00:55:43.519218    2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/431f8066-47ae-445f-ba11-89e3d9b34f04-xtables-lock\") pod \"kindnet-2kqbq\" (UID: \"431f8066-47ae-445f-ba11-89e3d9b34f04\") " pod="kube-system/kindnet-2kqbq"
	Nov 22 00:55:43 no-preload-165130 kubelet[2018]: I1122 00:55:43.519239    2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvtt2\" (UniqueName: \"kubernetes.io/projected/431f8066-47ae-445f-ba11-89e3d9b34f04-kube-api-access-zvtt2\") pod \"kindnet-2kqbq\" (UID: \"431f8066-47ae-445f-ba11-89e3d9b34f04\") " pod="kube-system/kindnet-2kqbq"
	Nov 22 00:55:43 no-preload-165130 kubelet[2018]: I1122 00:55:43.519325    2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b7ff7069-d8ba-4340-b2a8-57db9eb94b57-kube-proxy\") pod \"kube-proxy-kr4ll\" (UID: \"b7ff7069-d8ba-4340-b2a8-57db9eb94b57\") " pod="kube-system/kube-proxy-kr4ll"
	Nov 22 00:55:43 no-preload-165130 kubelet[2018]: I1122 00:55:43.519362    2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9bpk\" (UniqueName: \"kubernetes.io/projected/b7ff7069-d8ba-4340-b2a8-57db9eb94b57-kube-api-access-t9bpk\") pod \"kube-proxy-kr4ll\" (UID: \"b7ff7069-d8ba-4340-b2a8-57db9eb94b57\") " pod="kube-system/kube-proxy-kr4ll"
	Nov 22 00:55:43 no-preload-165130 kubelet[2018]: I1122 00:55:43.798439    2018 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 22 00:55:44 no-preload-165130 kubelet[2018]: W1122 00:55:44.125215    2018 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/1c65dce5fc4b31daf0735f57309cfc4584efb68447875212bc03d5cc1861ca03/crio-51122fafb169fb335fc329541c8903b15eaa13393b3f182788478279d732f914 WatchSource:0}: Error finding container 51122fafb169fb335fc329541c8903b15eaa13393b3f182788478279d732f914: Status 404 returned error can't find the container with id 51122fafb169fb335fc329541c8903b15eaa13393b3f182788478279d732f914
	Nov 22 00:55:44 no-preload-165130 kubelet[2018]: I1122 00:55:44.843722    2018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kr4ll" podStartSLOduration=1.843700909 podStartE2EDuration="1.843700909s" podCreationTimestamp="2025-11-22 00:55:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:55:44.809615917 +0000 UTC m=+7.625017096" watchObservedRunningTime="2025-11-22 00:55:44.843700909 +0000 UTC m=+7.659102080"
	Nov 22 00:55:59 no-preload-165130 kubelet[2018]: I1122 00:55:59.773509    2018 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 22 00:55:59 no-preload-165130 kubelet[2018]: I1122 00:55:59.803839    2018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-2kqbq" podStartSLOduration=11.795314099 podStartE2EDuration="16.803821511s" podCreationTimestamp="2025-11-22 00:55:43 +0000 UTC" firstStartedPulling="2025-11-22 00:55:44.124492451 +0000 UTC m=+6.939893622" lastFinishedPulling="2025-11-22 00:55:49.132999855 +0000 UTC m=+11.948401034" observedRunningTime="2025-11-22 00:55:49.808109724 +0000 UTC m=+12.623510895" watchObservedRunningTime="2025-11-22 00:55:59.803821511 +0000 UTC m=+22.619222690"
	Nov 22 00:55:59 no-preload-165130 kubelet[2018]: I1122 00:55:59.904519    2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54abb602-6f61-4692-a49d-c67637de05aa-config-volume\") pod \"coredns-66bc5c9577-pt27w\" (UID: \"54abb602-6f61-4692-a49d-c67637de05aa\") " pod="kube-system/coredns-66bc5c9577-pt27w"
	Nov 22 00:55:59 no-preload-165130 kubelet[2018]: I1122 00:55:59.904847    2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nst6l\" (UniqueName: \"kubernetes.io/projected/3cb5ecac-491c-4635-85b4-a7e2719d7aec-kube-api-access-nst6l\") pod \"storage-provisioner\" (UID: \"3cb5ecac-491c-4635-85b4-a7e2719d7aec\") " pod="kube-system/storage-provisioner"
	Nov 22 00:55:59 no-preload-165130 kubelet[2018]: I1122 00:55:59.905019    2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnqzm\" (UniqueName: \"kubernetes.io/projected/54abb602-6f61-4692-a49d-c67637de05aa-kube-api-access-cnqzm\") pod \"coredns-66bc5c9577-pt27w\" (UID: \"54abb602-6f61-4692-a49d-c67637de05aa\") " pod="kube-system/coredns-66bc5c9577-pt27w"
	Nov 22 00:55:59 no-preload-165130 kubelet[2018]: I1122 00:55:59.905188    2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3cb5ecac-491c-4635-85b4-a7e2719d7aec-tmp\") pod \"storage-provisioner\" (UID: \"3cb5ecac-491c-4635-85b4-a7e2719d7aec\") " pod="kube-system/storage-provisioner"
	Nov 22 00:56:00 no-preload-165130 kubelet[2018]: I1122 00:56:00.847458    2018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.847438764 podStartE2EDuration="15.847438764s" podCreationTimestamp="2025-11-22 00:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:56:00.830469817 +0000 UTC m=+23.645871012" watchObservedRunningTime="2025-11-22 00:56:00.847438764 +0000 UTC m=+23.662839943"
	Nov 22 00:56:02 no-preload-165130 kubelet[2018]: I1122 00:56:02.922080    2018 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-pt27w" podStartSLOduration=19.922058065999998 podStartE2EDuration="19.922058066s" podCreationTimestamp="2025-11-22 00:55:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:56:00.847909306 +0000 UTC m=+23.663310493" watchObservedRunningTime="2025-11-22 00:56:02.922058066 +0000 UTC m=+25.737459245"
	Nov 22 00:56:03 no-preload-165130 kubelet[2018]: I1122 00:56:03.041277    2018 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj6mm\" (UniqueName: \"kubernetes.io/projected/e6b0f65d-f761-4d8a-b568-8eb439d4ec02-kube-api-access-bj6mm\") pod \"busybox\" (UID: \"e6b0f65d-f761-4d8a-b568-8eb439d4ec02\") " pod="default/busybox"
	Nov 22 00:56:11 no-preload-165130 kubelet[2018]: E1122 00:56:11.047138    2018 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:57950->127.0.0.1:33615: write tcp 127.0.0.1:57950->127.0.0.1:33615: write: broken pipe
	
	
	==> storage-provisioner [54b1534f6e5fc7bec62a0f71823393d141eb80ec9df39020692f508522f5ff5b] <==
	I1122 00:56:00.414772       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1122 00:56:00.439798       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1122 00:56:00.440095       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1122 00:56:00.444248       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:56:00.489477       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1122 00:56:00.489712       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1122 00:56:00.493535       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fa43c339-2ef6-4277-ae88-e611a28aa232", APIVersion:"v1", ResourceVersion:"450", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-165130_674f855b-0336-453d-b9c5-806dd386da2e became leader
	I1122 00:56:00.493726       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-165130_674f855b-0336-453d-b9c5-806dd386da2e!
	W1122 00:56:00.541728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:56:00.564285       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1122 00:56:00.594716       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-165130_674f855b-0336-453d-b9c5-806dd386da2e!
	W1122 00:56:02.567704       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:56:02.572702       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:56:04.623514       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:56:04.633134       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:56:06.636848       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:56:06.646804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:56:08.649529       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:56:08.654467       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:56:10.658168       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:56:10.662788       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:56:12.666435       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:56:12.674250       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-165130 -n no-preload-165130
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-165130 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.64s)

x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.73s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-879000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-879000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (356.445182ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:56:47Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-879000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-879000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-879000 describe deploy/metrics-server -n kube-system: exit status 1 (134.693658ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-879000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-879000
helpers_test.go:243: (dbg) docker inspect embed-certs-879000:

-- stdout --
	[
	    {
	        "Id": "a6fb6b81dce5b3616a7bbeb7273662d346745bb8e0cc70a843d43e0a09366fd0",
	        "Created": "2025-11-22T00:55:18.964561473Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 704364,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-22T00:55:19.033047247Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:97061622515a85470dad13cb046ad5fc5021fcd2b05aa921a620abb52951cd5d",
	        "ResolvConfPath": "/var/lib/docker/containers/a6fb6b81dce5b3616a7bbeb7273662d346745bb8e0cc70a843d43e0a09366fd0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a6fb6b81dce5b3616a7bbeb7273662d346745bb8e0cc70a843d43e0a09366fd0/hostname",
	        "HostsPath": "/var/lib/docker/containers/a6fb6b81dce5b3616a7bbeb7273662d346745bb8e0cc70a843d43e0a09366fd0/hosts",
	        "LogPath": "/var/lib/docker/containers/a6fb6b81dce5b3616a7bbeb7273662d346745bb8e0cc70a843d43e0a09366fd0/a6fb6b81dce5b3616a7bbeb7273662d346745bb8e0cc70a843d43e0a09366fd0-json.log",
	        "Name": "/embed-certs-879000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-879000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-879000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a6fb6b81dce5b3616a7bbeb7273662d346745bb8e0cc70a843d43e0a09366fd0",
	                "LowerDir": "/var/lib/docker/overlay2/b7e6923f56a551fc28b0dd2aeb630a3573a17c8126bc88462d7dcfbefd35cac0-init/diff:/var/lib/docker/overlay2/7e8788c6de692bc1c3758a2bb2c4b8da0fbba26855f855c0f3b655bfbdd92f8e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b7e6923f56a551fc28b0dd2aeb630a3573a17c8126bc88462d7dcfbefd35cac0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b7e6923f56a551fc28b0dd2aeb630a3573a17c8126bc88462d7dcfbefd35cac0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b7e6923f56a551fc28b0dd2aeb630a3573a17c8126bc88462d7dcfbefd35cac0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-879000",
	                "Source": "/var/lib/docker/volumes/embed-certs-879000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-879000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-879000",
	                "name.minikube.sigs.k8s.io": "embed-certs-879000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bfb884f39f65987897f727742dfe92eeacb065248eab48c1a864deac6b53de2b",
	            "SandboxKey": "/var/run/docker/netns/bfb884f39f65",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33787"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33788"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33791"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33789"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33790"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-879000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "82:4b:0a:29:44:96",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9a53cf267b81b1ff031dda8888cce06c9d46b1b11b960898e399a8e14526904f",
	                    "EndpointID": "411b1ed7dc2a465faa340d694af0e5acaf21dfba22019e5322b4d7c373cc45dd",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-879000",
	                        "a6fb6b81dce5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-879000 -n embed-certs-879000
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-879000 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-879000 logs -n 25: (1.976318913s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p kubernetes-upgrade-134864                                                                                                                                                                                                                  │ kubernetes-upgrade-134864 │ jenkins │ v1.37.0 │ 22 Nov 25 00:50 UTC │ 22 Nov 25 00:51 UTC │
	│ start   │ -p cert-expiration-621390 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-621390    │ jenkins │ v1.37.0 │ 22 Nov 25 00:51 UTC │ 22 Nov 25 00:51 UTC │
	│ delete  │ -p force-systemd-env-634519                                                                                                                                                                                                                   │ force-systemd-env-634519  │ jenkins │ v1.37.0 │ 22 Nov 25 00:51 UTC │ 22 Nov 25 00:51 UTC │
	│ start   │ -p cert-options-002126 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-002126       │ jenkins │ v1.37.0 │ 22 Nov 25 00:51 UTC │ 22 Nov 25 00:52 UTC │
	│ ssh     │ cert-options-002126 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-002126       │ jenkins │ v1.37.0 │ 22 Nov 25 00:52 UTC │ 22 Nov 25 00:52 UTC │
	│ ssh     │ -p cert-options-002126 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-002126       │ jenkins │ v1.37.0 │ 22 Nov 25 00:52 UTC │ 22 Nov 25 00:52 UTC │
	│ delete  │ -p cert-options-002126                                                                                                                                                                                                                        │ cert-options-002126       │ jenkins │ v1.37.0 │ 22 Nov 25 00:52 UTC │ 22 Nov 25 00:52 UTC │
	│ start   │ -p old-k8s-version-625837 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-625837    │ jenkins │ v1.37.0 │ 22 Nov 25 00:52 UTC │ 22 Nov 25 00:53 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-625837 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-625837    │ jenkins │ v1.37.0 │ 22 Nov 25 00:53 UTC │                     │
	│ stop    │ -p old-k8s-version-625837 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-625837    │ jenkins │ v1.37.0 │ 22 Nov 25 00:53 UTC │ 22 Nov 25 00:53 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-625837 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-625837    │ jenkins │ v1.37.0 │ 22 Nov 25 00:53 UTC │ 22 Nov 25 00:53 UTC │
	│ start   │ -p old-k8s-version-625837 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-625837    │ jenkins │ v1.37.0 │ 22 Nov 25 00:53 UTC │ 22 Nov 25 00:54 UTC │
	│ image   │ old-k8s-version-625837 image list --format=json                                                                                                                                                                                               │ old-k8s-version-625837    │ jenkins │ v1.37.0 │ 22 Nov 25 00:54 UTC │ 22 Nov 25 00:54 UTC │
	│ pause   │ -p old-k8s-version-625837 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-625837    │ jenkins │ v1.37.0 │ 22 Nov 25 00:54 UTC │                     │
	│ start   │ -p cert-expiration-621390 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-621390    │ jenkins │ v1.37.0 │ 22 Nov 25 00:54 UTC │ 22 Nov 25 00:55 UTC │
	│ delete  │ -p old-k8s-version-625837                                                                                                                                                                                                                     │ old-k8s-version-625837    │ jenkins │ v1.37.0 │ 22 Nov 25 00:54 UTC │ 22 Nov 25 00:54 UTC │
	│ delete  │ -p old-k8s-version-625837                                                                                                                                                                                                                     │ old-k8s-version-625837    │ jenkins │ v1.37.0 │ 22 Nov 25 00:54 UTC │ 22 Nov 25 00:54 UTC │
	│ start   │ -p no-preload-165130 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-165130         │ jenkins │ v1.37.0 │ 22 Nov 25 00:54 UTC │ 22 Nov 25 00:56 UTC │
	│ delete  │ -p cert-expiration-621390                                                                                                                                                                                                                     │ cert-expiration-621390    │ jenkins │ v1.37.0 │ 22 Nov 25 00:55 UTC │ 22 Nov 25 00:55 UTC │
	│ start   │ -p embed-certs-879000 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-879000        │ jenkins │ v1.37.0 │ 22 Nov 25 00:55 UTC │ 22 Nov 25 00:56 UTC │
	│ addons  │ enable metrics-server -p no-preload-165130 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-165130         │ jenkins │ v1.37.0 │ 22 Nov 25 00:56 UTC │                     │
	│ stop    │ -p no-preload-165130 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-165130         │ jenkins │ v1.37.0 │ 22 Nov 25 00:56 UTC │ 22 Nov 25 00:56 UTC │
	│ addons  │ enable dashboard -p no-preload-165130 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-165130         │ jenkins │ v1.37.0 │ 22 Nov 25 00:56 UTC │ 22 Nov 25 00:56 UTC │
	│ start   │ -p no-preload-165130 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-165130         │ jenkins │ v1.37.0 │ 22 Nov 25 00:56 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-879000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-879000        │ jenkins │ v1.37.0 │ 22 Nov 25 00:56 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 00:56:25
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1122 00:56:25.934824  707914 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:56:25.934950  707914 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:56:25.934958  707914 out.go:374] Setting ErrFile to fd 2...
	I1122 00:56:25.934963  707914 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:56:25.935329  707914 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1122 00:56:25.935769  707914 out.go:368] Setting JSON to false
	I1122 00:56:25.937052  707914 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":20302,"bootTime":1763752684,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1122 00:56:25.937125  707914 start.go:143] virtualization:  
	I1122 00:56:25.940461  707914 out.go:179] * [no-preload-165130] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1122 00:56:25.944384  707914 notify.go:221] Checking for updates...
	I1122 00:56:25.944907  707914 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:56:25.947891  707914 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:56:25.952137  707914 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1122 00:56:25.954993  707914 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-513600/.minikube
	I1122 00:56:25.957942  707914 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1122 00:56:25.960762  707914 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:56:25.964086  707914 config.go:182] Loaded profile config "no-preload-165130": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:56:25.964688  707914 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:56:25.988433  707914 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1122 00:56:25.988555  707914 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:56:26.053623  707914 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-22 00:56:26.043515938 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:56:26.053730  707914 docker.go:319] overlay module found
	I1122 00:56:26.057090  707914 out.go:179] * Using the docker driver based on existing profile
	I1122 00:56:26.060281  707914 start.go:309] selected driver: docker
	I1122 00:56:26.060304  707914 start.go:930] validating driver "docker" against &{Name:no-preload-165130 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-165130 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:56:26.060411  707914 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:56:26.061114  707914 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:56:26.121224  707914 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-22 00:56:26.111922789 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:56:26.121566  707914 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:56:26.121599  707914 cni.go:84] Creating CNI manager for ""
	I1122 00:56:26.121657  707914 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:56:26.121701  707914 start.go:353] cluster config:
	{Name:no-preload-165130 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-165130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:56:26.126599  707914 out.go:179] * Starting "no-preload-165130" primary control-plane node in "no-preload-165130" cluster
	I1122 00:56:26.129461  707914 cache.go:134] Beginning downloading kic base image for docker with crio
	I1122 00:56:26.132380  707914 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:56:26.135267  707914 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:56:26.135336  707914 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:56:26.135405  707914 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/no-preload-165130/config.json ...
	I1122 00:56:26.135707  707914 cache.go:107] acquiring lock: {Name:mkccae51ac51b4e13a82c99c90f714f5d8e6a78d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:56:26.135787  707914 cache.go:115] /home/jenkins/minikube-integration/21934-513600/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1122 00:56:26.135797  707914 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21934-513600/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 98.122µs
	I1122 00:56:26.135819  707914 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21934-513600/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1122 00:56:26.135831  707914 cache.go:107] acquiring lock: {Name:mkb712154645fc3ac14bde131446a775327c2641 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:56:26.135882  707914 cache.go:115] /home/jenkins/minikube-integration/21934-513600/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1122 00:56:26.135890  707914 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21934-513600/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 61.315µs
	I1122 00:56:26.135898  707914 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21934-513600/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1122 00:56:26.135908  707914 cache.go:107] acquiring lock: {Name:mkd5d7eccf0150d8168e1c2048d8a5378a152610 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:56:26.135937  707914 cache.go:115] /home/jenkins/minikube-integration/21934-513600/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1122 00:56:26.135942  707914 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21934-513600/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 35.88µs
	I1122 00:56:26.135948  707914 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21934-513600/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1122 00:56:26.135931  707914 cache.go:107] acquiring lock: {Name:mk405569f9b4812fa789b3bc7f2e872b2b814256 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:56:26.135956  707914 cache.go:107] acquiring lock: {Name:mk918655b4de1599248d33bc22ae3173a79f2505 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:56:26.135986  707914 cache.go:115] /home/jenkins/minikube-integration/21934-513600/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1122 00:56:26.135991  707914 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21934-513600/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 36.176µs
	I1122 00:56:26.136001  707914 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21934-513600/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1122 00:56:26.136005  707914 cache.go:115] /home/jenkins/minikube-integration/21934-513600/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1122 00:56:26.136013  707914 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21934-513600/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 89.293µs
	I1122 00:56:26.136010  707914 cache.go:107] acquiring lock: {Name:mkccd5490eb096c02d7bf53ba45e6b299cd7b832 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:56:26.136021  707914 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21934-513600/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1122 00:56:26.136033  707914 cache.go:107] acquiring lock: {Name:mkebcb918b6c1f985f0ddf1d377887bcdde1db54 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:56:26.136051  707914 cache.go:107] acquiring lock: {Name:mk608278b4dc954ace4fd92e9035710e309313b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:56:26.136064  707914 cache.go:115] /home/jenkins/minikube-integration/21934-513600/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1122 00:56:26.136070  707914 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21934-513600/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 38.292µs
	I1122 00:56:26.136076  707914 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21934-513600/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1122 00:56:26.136041  707914 cache.go:115] /home/jenkins/minikube-integration/21934-513600/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1122 00:56:26.136084  707914 cache.go:115] /home/jenkins/minikube-integration/21934-513600/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1122 00:56:26.136090  707914 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21934-513600/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 40.204µs
	I1122 00:56:26.136097  707914 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21934-513600/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1122 00:56:26.136087  707914 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21934-513600/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 77.373µs
	I1122 00:56:26.136103  707914 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21934-513600/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1122 00:56:26.136108  707914 cache.go:87] Successfully saved all images to host disk.
	I1122 00:56:26.161481  707914 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:56:26.161503  707914 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:56:26.161520  707914 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:56:26.161544  707914 start.go:360] acquireMachinesLock for no-preload-165130: {Name:mk56e2e71927a7ee6c11b310f276b198601e924e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:56:26.161599  707914 start.go:364] duration metric: took 35.716µs to acquireMachinesLock for "no-preload-165130"
	I1122 00:56:26.161628  707914 start.go:96] Skipping create...Using existing machine configuration
	I1122 00:56:26.161638  707914 fix.go:54] fixHost starting: 
	I1122 00:56:26.161942  707914 cli_runner.go:164] Run: docker container inspect no-preload-165130 --format={{.State.Status}}
	I1122 00:56:26.178553  707914 fix.go:112] recreateIfNeeded on no-preload-165130: state=Stopped err=<nil>
	W1122 00:56:26.178590  707914 fix.go:138] unexpected machine state, will restart: <nil>
	W1122 00:56:23.450620  703787 node_ready.go:57] node "embed-certs-879000" has "Ready":"False" status (will retry)
	W1122 00:56:25.451090  703787 node_ready.go:57] node "embed-certs-879000" has "Ready":"False" status (will retry)
	W1122 00:56:27.451301  703787 node_ready.go:57] node "embed-certs-879000" has "Ready":"False" status (will retry)
	I1122 00:56:26.181947  707914 out.go:252] * Restarting existing docker container for "no-preload-165130" ...
	I1122 00:56:26.182134  707914 cli_runner.go:164] Run: docker start no-preload-165130
	I1122 00:56:26.449170  707914 cli_runner.go:164] Run: docker container inspect no-preload-165130 --format={{.State.Status}}
	I1122 00:56:26.472612  707914 kic.go:430] container "no-preload-165130" state is running.
	I1122 00:56:26.473006  707914 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-165130
	I1122 00:56:26.502010  707914 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/no-preload-165130/config.json ...
	I1122 00:56:26.502238  707914 machine.go:94] provisionDockerMachine start ...
	I1122 00:56:26.502303  707914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-165130
	I1122 00:56:26.533224  707914 main.go:143] libmachine: Using SSH client type: native
	I1122 00:56:26.533550  707914 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33792 <nil> <nil>}
	I1122 00:56:26.533566  707914 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:56:26.534223  707914 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1122 00:56:29.681479  707914 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-165130
	
	I1122 00:56:29.681502  707914 ubuntu.go:182] provisioning hostname "no-preload-165130"
	I1122 00:56:29.681608  707914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-165130
	I1122 00:56:29.700303  707914 main.go:143] libmachine: Using SSH client type: native
	I1122 00:56:29.700610  707914 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33792 <nil> <nil>}
	I1122 00:56:29.700628  707914 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-165130 && echo "no-preload-165130" | sudo tee /etc/hostname
	I1122 00:56:29.852171  707914 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-165130
	
	I1122 00:56:29.852247  707914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-165130
	I1122 00:56:29.870343  707914 main.go:143] libmachine: Using SSH client type: native
	I1122 00:56:29.870657  707914 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33792 <nil> <nil>}
	I1122 00:56:29.870681  707914 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-165130' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-165130/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-165130' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:56:30.031287  707914 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1122 00:56:30.031337  707914 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-513600/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-513600/.minikube}
	I1122 00:56:30.031402  707914 ubuntu.go:190] setting up certificates
	I1122 00:56:30.031415  707914 provision.go:84] configureAuth start
	I1122 00:56:30.031509  707914 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-165130
	I1122 00:56:30.058034  707914 provision.go:143] copyHostCerts
	I1122 00:56:30.058111  707914 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem, removing ...
	I1122 00:56:30.058128  707914 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem
	I1122 00:56:30.058209  707914 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem (1078 bytes)
	I1122 00:56:30.058309  707914 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem, removing ...
	I1122 00:56:30.058320  707914 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem
	I1122 00:56:30.058347  707914 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem (1123 bytes)
	I1122 00:56:30.058497  707914 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem, removing ...
	I1122 00:56:30.058508  707914 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem
	I1122 00:56:30.058545  707914 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem (1675 bytes)
	I1122 00:56:30.058610  707914 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem org=jenkins.no-preload-165130 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-165130]
	I1122 00:56:30.159518  707914 provision.go:177] copyRemoteCerts
	I1122 00:56:30.159591  707914 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:56:30.159640  707914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-165130
	I1122 00:56:30.179579  707914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33792 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/no-preload-165130/id_rsa Username:docker}
	I1122 00:56:30.285935  707914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:56:30.311299  707914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1122 00:56:30.333055  707914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1122 00:56:30.351236  707914 provision.go:87] duration metric: took 319.795193ms to configureAuth
	I1122 00:56:30.351332  707914 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:56:30.351556  707914 config.go:182] Loaded profile config "no-preload-165130": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:56:30.351676  707914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-165130
	I1122 00:56:30.369383  707914 main.go:143] libmachine: Using SSH client type: native
	I1122 00:56:30.369684  707914 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33792 <nil> <nil>}
	I1122 00:56:30.369698  707914 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1122 00:56:30.741006  707914 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1122 00:56:30.741089  707914 machine.go:97] duration metric: took 4.238840352s to provisionDockerMachine
	I1122 00:56:30.741118  707914 start.go:293] postStartSetup for "no-preload-165130" (driver="docker")
	I1122 00:56:30.741144  707914 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:56:30.741246  707914 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:56:30.741318  707914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-165130
	I1122 00:56:30.762354  707914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33792 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/no-preload-165130/id_rsa Username:docker}
	I1122 00:56:30.861733  707914 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:56:30.865030  707914 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:56:30.865098  707914 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:56:30.865116  707914 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/addons for local assets ...
	I1122 00:56:30.865170  707914 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/files for local assets ...
	I1122 00:56:30.865260  707914 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem -> 5169372.pem in /etc/ssl/certs
	I1122 00:56:30.865362  707914 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:56:30.872732  707914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /etc/ssl/certs/5169372.pem (1708 bytes)
	I1122 00:56:30.890988  707914 start.go:296] duration metric: took 149.840652ms for postStartSetup
	I1122 00:56:30.891070  707914 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:56:30.891135  707914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-165130
	I1122 00:56:30.908112  707914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33792 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/no-preload-165130/id_rsa Username:docker}
	I1122 00:56:31.009524  707914 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:56:31.014827  707914 fix.go:56] duration metric: took 4.853181484s for fixHost
	I1122 00:56:31.014855  707914 start.go:83] releasing machines lock for "no-preload-165130", held for 4.853242364s
	I1122 00:56:31.014927  707914 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-165130
	I1122 00:56:31.033293  707914 ssh_runner.go:195] Run: cat /version.json
	I1122 00:56:31.033342  707914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-165130
	I1122 00:56:31.033363  707914 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:56:31.033420  707914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-165130
	I1122 00:56:31.054870  707914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33792 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/no-preload-165130/id_rsa Username:docker}
	I1122 00:56:31.054880  707914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33792 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/no-preload-165130/id_rsa Username:docker}
	I1122 00:56:31.271670  707914 ssh_runner.go:195] Run: systemctl --version
	I1122 00:56:31.278258  707914 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1122 00:56:31.317852  707914 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:56:31.322370  707914 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:56:31.322465  707914 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:56:31.334329  707914 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1122 00:56:31.334359  707914 start.go:496] detecting cgroup driver to use...
	I1122 00:56:31.334391  707914 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1122 00:56:31.334449  707914 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1122 00:56:31.350752  707914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1122 00:56:31.364064  707914 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:56:31.364136  707914 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:56:31.380264  707914 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:56:31.393761  707914 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:56:31.517259  707914 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:56:31.632821  707914 docker.go:234] disabling docker service ...
	I1122 00:56:31.632911  707914 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:56:31.648380  707914 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:56:31.662440  707914 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:56:31.780524  707914 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:56:31.911282  707914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:56:31.924438  707914 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:56:31.939771  707914 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1122 00:56:31.939893  707914 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:56:31.950512  707914 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1122 00:56:31.950625  707914 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:56:31.962423  707914 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:56:31.971970  707914 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:56:31.981359  707914 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:56:31.990323  707914 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:56:32.001707  707914 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:56:32.012025  707914 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:56:32.022638  707914 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:56:32.031136  707914 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:56:32.039250  707914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:56:32.158957  707914 ssh_runner.go:195] Run: sudo systemctl restart crio
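The steps above rewrite two fields in /etc/crio/crio.conf.d/02-crio.conf (the pause image and the cgroup manager) before reloading systemd and restarting CRI-O. As a rough illustration only, a hypothetical standalone Go program (not minikube's own helper, and operating on a local copy of the file rather than running sed over SSH) performing the same two edits could look like:

// reconfigure_crio.go: hypothetical sketch of the two config rewrites logged above.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "02-crio.conf" // assumption: a local copy of /etc/crio/crio.conf.d/02-crio.conf

	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := pause.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))

	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	out = cgroup.ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))

	if err := os.WriteFile(path, out, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}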
	I1122 00:56:32.331257  707914 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1122 00:56:32.331335  707914 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1122 00:56:32.335434  707914 start.go:564] Will wait 60s for crictl version
	I1122 00:56:32.335545  707914 ssh_runner.go:195] Run: which crictl
	I1122 00:56:32.339287  707914 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:56:32.364158  707914 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1122 00:56:32.364260  707914 ssh_runner.go:195] Run: crio --version
	I1122 00:56:32.393577  707914 ssh_runner.go:195] Run: crio --version
	I1122 00:56:32.425634  707914 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	W1122 00:56:29.951194  703787 node_ready.go:57] node "embed-certs-879000" has "Ready":"False" status (will retry)
	W1122 00:56:31.953271  703787 node_ready.go:57] node "embed-certs-879000" has "Ready":"False" status (will retry)
	I1122 00:56:32.428501  707914 cli_runner.go:164] Run: docker network inspect no-preload-165130 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:56:32.444729  707914 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1122 00:56:32.448749  707914 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
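The bash one-liner above regenerates /etc/hosts so that host.minikube.internal resolves to the host gateway (192.168.85.1 in this run): it drops any existing entry and appends the new one. A minimal local sketch of the same rewrite, assuming a writable copy of the hosts file (the filename here is purely illustrative), might be:

// pin_minikube_host.go: hypothetical local equivalent of the /etc/hosts rewrite above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const path = "hosts" // assumption: a local copy of /etc/hosts
	const entry = "192.168.85.1\thost.minikube.internal"

	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	// Keep every line that is not an old host.minikube.internal entry, then append the new one.
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)

	if err := os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}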
	I1122 00:56:32.459231  707914 kubeadm.go:884] updating cluster {Name:no-preload-165130 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-165130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1122 00:56:32.459364  707914 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:56:32.459407  707914 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:56:32.498778  707914 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:56:32.498805  707914 cache_images.go:86] Images are preloaded, skipping loading
	I1122 00:56:32.498812  707914 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1122 00:56:32.498913  707914 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-165130 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-165130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1122 00:56:32.498997  707914 ssh_runner.go:195] Run: crio config
	I1122 00:56:32.570406  707914 cni.go:84] Creating CNI manager for ""
	I1122 00:56:32.570432  707914 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:56:32.570450  707914 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1122 00:56:32.570472  707914 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-165130 NodeName:no-preload-165130 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 00:56:32.570607  707914 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-165130"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1122 00:56:32.570685  707914 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 00:56:32.580099  707914 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:56:32.580178  707914 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1122 00:56:32.587856  707914 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1122 00:56:32.601067  707914 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:56:32.615011  707914 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
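The kubeadm configuration rendered above is staged as /var/tmp/minikube/kubeadm.yaml.new and later compared against the active /var/tmp/minikube/kubeadm.yaml (the `sudo diff -u` run further down) to decide whether the control plane needs reconfiguration. A simplified, hypothetical sketch of that stage-and-compare step, with the document body truncated to its first lines for brevity, could be:

// stage_kubeadm_config.go: hypothetical sketch of staging kubeadm.yaml.new and
// checking whether it differs from the config already on disk.
package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	// Assumption: in practice this holds the full rendered document shown above.
	rendered := []byte("apiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\n")

	const current = "/var/tmp/minikube/kubeadm.yaml"
	const staged = "/var/tmp/minikube/kubeadm.yaml.new"

	if err := os.WriteFile(staged, rendered, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	existing, err := os.ReadFile(current)
	switch {
	case os.IsNotExist(err):
		fmt.Println("no existing kubeadm.yaml; full init required")
	case err != nil:
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	case bytes.Equal(existing, rendered):
		fmt.Println("config unchanged; cluster does not require reconfiguration")
	default:
		fmt.Println("config changed; control plane needs to be reconfigured")
	}
}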
	I1122 00:56:32.628159  707914 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1122 00:56:32.631893  707914 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:56:32.641404  707914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:56:32.754395  707914 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:56:32.771217  707914 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/no-preload-165130 for IP: 192.168.85.2
	I1122 00:56:32.771292  707914 certs.go:195] generating shared ca certs ...
	I1122 00:56:32.771323  707914 certs.go:227] acquiring lock for ca certs: {Name:mkaf4c79493334cb2058b349c36be7473837f9b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:56:32.771508  707914 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key
	I1122 00:56:32.771585  707914 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key
	I1122 00:56:32.771621  707914 certs.go:257] generating profile certs ...
	I1122 00:56:32.771748  707914 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/no-preload-165130/client.key
	I1122 00:56:32.771862  707914 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/no-preload-165130/apiserver.key.f1b30e0b
	I1122 00:56:32.771945  707914 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/no-preload-165130/proxy-client.key
	I1122 00:56:32.772082  707914 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem (1338 bytes)
	W1122 00:56:32.772140  707914 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937_empty.pem, impossibly tiny 0 bytes
	I1122 00:56:32.772164  707914 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:56:32.772224  707914 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:56:32.772275  707914 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:56:32.772333  707914 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem (1675 bytes)
	I1122 00:56:32.772412  707914 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem (1708 bytes)
	I1122 00:56:32.773028  707914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:56:32.794889  707914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1122 00:56:32.812497  707914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:56:32.835040  707914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:56:32.856431  707914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/no-preload-165130/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1122 00:56:32.879700  707914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/no-preload-165130/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1122 00:56:32.903227  707914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/no-preload-165130/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:56:32.928545  707914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/no-preload-165130/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1122 00:56:32.954225  707914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /usr/share/ca-certificates/5169372.pem (1708 bytes)
	I1122 00:56:32.978869  707914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:56:33.012304  707914 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem --> /usr/share/ca-certificates/516937.pem (1338 bytes)
	I1122 00:56:33.035002  707914 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 00:56:33.049152  707914 ssh_runner.go:195] Run: openssl version
	I1122 00:56:33.058251  707914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516937.pem && ln -fs /usr/share/ca-certificates/516937.pem /etc/ssl/certs/516937.pem"
	I1122 00:56:33.068379  707914 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516937.pem
	I1122 00:56:33.074556  707914 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:55 /usr/share/ca-certificates/516937.pem
	I1122 00:56:33.074659  707914 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516937.pem
	I1122 00:56:33.117276  707914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516937.pem /etc/ssl/certs/51391683.0"
	I1122 00:56:33.125388  707914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5169372.pem && ln -fs /usr/share/ca-certificates/5169372.pem /etc/ssl/certs/5169372.pem"
	I1122 00:56:33.134310  707914 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5169372.pem
	I1122 00:56:33.138166  707914 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:55 /usr/share/ca-certificates/5169372.pem
	I1122 00:56:33.138280  707914 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5169372.pem
	I1122 00:56:33.180242  707914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5169372.pem /etc/ssl/certs/3ec20f2e.0"
	I1122 00:56:33.190456  707914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:56:33.199607  707914 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:56:33.203793  707914 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:56:33.204031  707914 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:56:33.250167  707914 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1122 00:56:33.258207  707914 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:56:33.261904  707914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1122 00:56:33.303943  707914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1122 00:56:33.345525  707914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1122 00:56:33.386920  707914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1122 00:56:33.433718  707914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1122 00:56:33.483800  707914 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
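Each `openssl x509 -noout -in <cert> -checkend 86400` run above asks whether a control-plane certificate will still be valid 24 hours from now. The same check can be expressed natively with Go's crypto/x509; the sketch below is illustrative (one hard-coded path, no SSH), not minikube's actual implementation:

// check_cert_expiry.go: sketch of the "-checkend 86400" probe for one certificate.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	const path = "/var/lib/minikube/certs/apiserver-kubelet-client.crt" // illustrative path from the log

	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Mirrors -checkend 86400: fail if the certificate expires within 24h.
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate expires within 24h; regeneration needed")
		os.Exit(1)
	}
	fmt.Printf("certificate valid until %s\n", cert.NotAfter.Format(time.RFC3339))
}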
	I1122 00:56:33.559321  707914 kubeadm.go:401] StartCluster: {Name:no-preload-165130 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-165130 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:56:33.559474  707914 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1122 00:56:33.559576  707914 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:56:33.671129  707914 cri.go:89] found id: "d445939f66bc78fcb625769cedbe045c3808629a73405f7634fc8403e2147225"
	I1122 00:56:33.671211  707914 cri.go:89] found id: "1842c88afa2f92b8f5db1d2172ed30aa44f94bd9a078bc60055ef3ff3665300f"
	I1122 00:56:33.671247  707914 cri.go:89] found id: "4a703cddbc0fe37c59864ce18d11f40681bb2c9564af9cff7e041d5680b0df58"
	I1122 00:56:33.671265  707914 cri.go:89] found id: "447fcc475a7323f248b3a0cdb76a205b4bbe6be27f083fa2031b33a0533a533e"
	I1122 00:56:33.671299  707914 cri.go:89] found id: ""
	I1122 00:56:33.671387  707914 ssh_runner.go:195] Run: sudo runc list -f json
	W1122 00:56:33.692895  707914 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:56:33Z" level=error msg="open /run/runc: no such file or directory"
	I1122 00:56:33.693037  707914 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 00:56:33.710717  707914 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1122 00:56:33.710789  707914 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1122 00:56:33.710876  707914 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1122 00:56:33.723597  707914 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1122 00:56:33.724563  707914 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-165130" does not appear in /home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1122 00:56:33.725206  707914 kubeconfig.go:62] /home/jenkins/minikube-integration/21934-513600/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-165130" cluster setting kubeconfig missing "no-preload-165130" context setting]
	I1122 00:56:33.726247  707914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/kubeconfig: {Name:mkcd094d8a765e5a81369c0fb33cb5e696b17bb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:56:33.728071  707914 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1122 00:56:33.743684  707914 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1122 00:56:33.743768  707914 kubeadm.go:602] duration metric: took 32.959224ms to restartPrimaryControlPlane
	I1122 00:56:33.743793  707914 kubeadm.go:403] duration metric: took 184.483831ms to StartCluster
	I1122 00:56:33.743836  707914 settings.go:142] acquiring lock: {Name:mk6c31eb57ec65b047b78b4e1046e03fe7cc77bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:56:33.743933  707914 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1122 00:56:33.745563  707914 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/kubeconfig: {Name:mkcd094d8a765e5a81369c0fb33cb5e696b17bb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:56:33.746096  707914 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:56:33.746491  707914 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1122 00:56:33.746570  707914 addons.go:70] Setting storage-provisioner=true in profile "no-preload-165130"
	I1122 00:56:33.746584  707914 addons.go:239] Setting addon storage-provisioner=true in "no-preload-165130"
	W1122 00:56:33.746590  707914 addons.go:248] addon storage-provisioner should already be in state true
	I1122 00:56:33.746612  707914 host.go:66] Checking if "no-preload-165130" exists ...
	I1122 00:56:33.747148  707914 cli_runner.go:164] Run: docker container inspect no-preload-165130 --format={{.State.Status}}
	I1122 00:56:33.747577  707914 config.go:182] Loaded profile config "no-preload-165130": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:56:33.747676  707914 addons.go:70] Setting dashboard=true in profile "no-preload-165130"
	I1122 00:56:33.747724  707914 addons.go:239] Setting addon dashboard=true in "no-preload-165130"
	W1122 00:56:33.747745  707914 addons.go:248] addon dashboard should already be in state true
	I1122 00:56:33.747795  707914 host.go:66] Checking if "no-preload-165130" exists ...
	I1122 00:56:33.748285  707914 cli_runner.go:164] Run: docker container inspect no-preload-165130 --format={{.State.Status}}
	I1122 00:56:33.748675  707914 addons.go:70] Setting default-storageclass=true in profile "no-preload-165130"
	I1122 00:56:33.748703  707914 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-165130"
	I1122 00:56:33.749010  707914 cli_runner.go:164] Run: docker container inspect no-preload-165130 --format={{.State.Status}}
	I1122 00:56:33.754993  707914 out.go:179] * Verifying Kubernetes components...
	I1122 00:56:33.759567  707914 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:56:33.807404  707914 addons.go:239] Setting addon default-storageclass=true in "no-preload-165130"
	W1122 00:56:33.807426  707914 addons.go:248] addon default-storageclass should already be in state true
	I1122 00:56:33.807450  707914 host.go:66] Checking if "no-preload-165130" exists ...
	I1122 00:56:33.807882  707914 cli_runner.go:164] Run: docker container inspect no-preload-165130 --format={{.State.Status}}
	I1122 00:56:33.812187  707914 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1122 00:56:33.812314  707914 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1122 00:56:33.816090  707914 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1122 00:56:33.817588  707914 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:56:33.817617  707914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1122 00:56:33.817690  707914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-165130
	I1122 00:56:33.820080  707914 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1122 00:56:33.820107  707914 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1122 00:56:33.820177  707914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-165130
	I1122 00:56:33.870030  707914 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1122 00:56:33.870049  707914 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1122 00:56:33.870122  707914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-165130
	I1122 00:56:33.885924  707914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33792 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/no-preload-165130/id_rsa Username:docker}
	I1122 00:56:33.887028  707914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33792 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/no-preload-165130/id_rsa Username:docker}
	I1122 00:56:33.909948  707914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33792 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/no-preload-165130/id_rsa Username:docker}
	I1122 00:56:34.128338  707914 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:56:34.146215  707914 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1122 00:56:34.146289  707914 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1122 00:56:34.165885  707914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:56:34.169685  707914 node_ready.go:35] waiting up to 6m0s for node "no-preload-165130" to be "Ready" ...
	I1122 00:56:34.220097  707914 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1122 00:56:34.220119  707914 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1122 00:56:34.267362  707914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1122 00:56:34.282090  707914 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1122 00:56:34.282115  707914 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1122 00:56:34.339761  707914 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1122 00:56:34.339784  707914 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1122 00:56:34.391615  707914 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1122 00:56:34.391641  707914 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1122 00:56:34.419714  707914 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1122 00:56:34.419744  707914 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1122 00:56:34.443049  707914 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1122 00:56:34.443073  707914 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1122 00:56:34.471366  707914 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1122 00:56:34.471391  707914 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1122 00:56:34.499024  707914 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1122 00:56:34.499049  707914 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1122 00:56:34.532738  707914 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
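The dashboard addon is installed by staging each manifest under /etc/kubernetes/addons and then invoking the cluster's own kubectl binary against all of them in a single apply, with KUBECONFIG pointing at /var/lib/minikube/kubeconfig. A reduced, hypothetical sketch of that final invocation (only a few of the staged manifests are listed here) is:

// apply_addon_manifests.go: hypothetical sketch of the kubectl apply step above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/dashboard-ns.yaml",
		"/etc/kubernetes/addons/dashboard-dp.yaml",
		"/etc/kubernetes/addons/dashboard-svc.yaml",
		// ... remaining dashboard-*.yaml files staged above
	}

	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}

	cmd := exec.Command("/var/lib/minikube/binaries/v1.34.1/kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}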
	W1122 00:56:34.451309  703787 node_ready.go:57] node "embed-certs-879000" has "Ready":"False" status (will retry)
	I1122 00:56:35.950793  703787 node_ready.go:49] node "embed-certs-879000" is "Ready"
	I1122 00:56:35.950823  703787 node_ready.go:38] duration metric: took 40.503146526s for node "embed-certs-879000" to be "Ready" ...
	I1122 00:56:35.950835  703787 api_server.go:52] waiting for apiserver process to appear ...
	I1122 00:56:35.950889  703787 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:56:35.989847  703787 api_server.go:72] duration metric: took 41.652740761s to wait for apiserver process to appear ...
	I1122 00:56:35.989916  703787 api_server.go:88] waiting for apiserver healthz status ...
	I1122 00:56:35.989952  703787 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:56:36.002806  703787 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1122 00:56:36.011823  703787 api_server.go:141] control plane version: v1.34.1
	I1122 00:56:36.011855  703787 api_server.go:131] duration metric: took 21.91743ms to wait for apiserver health ...
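The healthz wait above amounts to polling https://<apiserver>:8443/healthz until it answers 200 "ok". A minimal standalone probe in Go, using the address from this log and skipping TLS verification purely to keep the sketch self-contained (minikube's own check is not necessarily this lax), could be:

// healthz_probe.go: hypothetical sketch of polling the apiserver /healthz endpoint.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.76.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Fprintln(os.Stderr, "apiserver did not become healthy in time")
	os.Exit(1)
}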
	I1122 00:56:36.011865  703787 system_pods.go:43] waiting for kube-system pods to appear ...
	I1122 00:56:36.029363  703787 system_pods.go:59] 8 kube-system pods found
	I1122 00:56:36.029403  703787 system_pods.go:61] "coredns-66bc5c9577-h2kpd" [5adad534-0ba4-479f-8e5a-7f5a9e26fb1e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:56:36.029412  703787 system_pods.go:61] "etcd-embed-certs-879000" [7cebfe87-7413-4cfe-8899-73cdba19a310] Running
	I1122 00:56:36.029417  703787 system_pods.go:61] "kindnet-j8wwg" [29cadb16-a427-4f8b-b121-3af35927f8d5] Running
	I1122 00:56:36.029421  703787 system_pods.go:61] "kube-apiserver-embed-certs-879000" [66ed66fb-cf57-49c6-a5fc-a8814e40c10b] Running
	I1122 00:56:36.029426  703787 system_pods.go:61] "kube-controller-manager-embed-certs-879000" [347609cd-705b-441f-941f-936a0e0574f7] Running
	I1122 00:56:36.029430  703787 system_pods.go:61] "kube-proxy-w9bqj" [f56c390b-4d40-40a3-9862-f5081a6561e5] Running
	I1122 00:56:36.029434  703787 system_pods.go:61] "kube-scheduler-embed-certs-879000" [364d55f5-1b98-4087-999c-c7302863e10f] Running
	I1122 00:56:36.029439  703787 system_pods.go:61] "storage-provisioner" [042e1631-1c8e-4ce0-92e5-cdd4742fa06b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:56:36.029445  703787 system_pods.go:74] duration metric: took 17.573968ms to wait for pod list to return data ...
	I1122 00:56:36.029452  703787 default_sa.go:34] waiting for default service account to be created ...
	I1122 00:56:36.057339  703787 default_sa.go:45] found service account: "default"
	I1122 00:56:36.057417  703787 default_sa.go:55] duration metric: took 27.957942ms for default service account to be created ...
	I1122 00:56:36.057442  703787 system_pods.go:116] waiting for k8s-apps to be running ...
	I1122 00:56:36.123244  703787 system_pods.go:86] 8 kube-system pods found
	I1122 00:56:36.123328  703787 system_pods.go:89] "coredns-66bc5c9577-h2kpd" [5adad534-0ba4-479f-8e5a-7f5a9e26fb1e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:56:36.123351  703787 system_pods.go:89] "etcd-embed-certs-879000" [7cebfe87-7413-4cfe-8899-73cdba19a310] Running
	I1122 00:56:36.123386  703787 system_pods.go:89] "kindnet-j8wwg" [29cadb16-a427-4f8b-b121-3af35927f8d5] Running
	I1122 00:56:36.123407  703787 system_pods.go:89] "kube-apiserver-embed-certs-879000" [66ed66fb-cf57-49c6-a5fc-a8814e40c10b] Running
	I1122 00:56:36.123425  703787 system_pods.go:89] "kube-controller-manager-embed-certs-879000" [347609cd-705b-441f-941f-936a0e0574f7] Running
	I1122 00:56:36.123444  703787 system_pods.go:89] "kube-proxy-w9bqj" [f56c390b-4d40-40a3-9862-f5081a6561e5] Running
	I1122 00:56:36.123477  703787 system_pods.go:89] "kube-scheduler-embed-certs-879000" [364d55f5-1b98-4087-999c-c7302863e10f] Running
	I1122 00:56:36.123503  703787 system_pods.go:89] "storage-provisioner" [042e1631-1c8e-4ce0-92e5-cdd4742fa06b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:56:36.123567  703787 retry.go:31] will retry after 202.225361ms: missing components: kube-dns
	I1122 00:56:36.329622  703787 system_pods.go:86] 8 kube-system pods found
	I1122 00:56:36.329710  703787 system_pods.go:89] "coredns-66bc5c9577-h2kpd" [5adad534-0ba4-479f-8e5a-7f5a9e26fb1e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:56:36.329737  703787 system_pods.go:89] "etcd-embed-certs-879000" [7cebfe87-7413-4cfe-8899-73cdba19a310] Running
	I1122 00:56:36.329771  703787 system_pods.go:89] "kindnet-j8wwg" [29cadb16-a427-4f8b-b121-3af35927f8d5] Running
	I1122 00:56:36.329792  703787 system_pods.go:89] "kube-apiserver-embed-certs-879000" [66ed66fb-cf57-49c6-a5fc-a8814e40c10b] Running
	I1122 00:56:36.329856  703787 system_pods.go:89] "kube-controller-manager-embed-certs-879000" [347609cd-705b-441f-941f-936a0e0574f7] Running
	I1122 00:56:36.329891  703787 system_pods.go:89] "kube-proxy-w9bqj" [f56c390b-4d40-40a3-9862-f5081a6561e5] Running
	I1122 00:56:36.329910  703787 system_pods.go:89] "kube-scheduler-embed-certs-879000" [364d55f5-1b98-4087-999c-c7302863e10f] Running
	I1122 00:56:36.329946  703787 system_pods.go:89] "storage-provisioner" [042e1631-1c8e-4ce0-92e5-cdd4742fa06b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:56:36.329980  703787 retry.go:31] will retry after 319.883655ms: missing components: kube-dns
	I1122 00:56:36.671640  703787 system_pods.go:86] 8 kube-system pods found
	I1122 00:56:36.671670  703787 system_pods.go:89] "coredns-66bc5c9577-h2kpd" [5adad534-0ba4-479f-8e5a-7f5a9e26fb1e] Running
	I1122 00:56:36.671676  703787 system_pods.go:89] "etcd-embed-certs-879000" [7cebfe87-7413-4cfe-8899-73cdba19a310] Running
	I1122 00:56:36.671680  703787 system_pods.go:89] "kindnet-j8wwg" [29cadb16-a427-4f8b-b121-3af35927f8d5] Running
	I1122 00:56:36.671684  703787 system_pods.go:89] "kube-apiserver-embed-certs-879000" [66ed66fb-cf57-49c6-a5fc-a8814e40c10b] Running
	I1122 00:56:36.671689  703787 system_pods.go:89] "kube-controller-manager-embed-certs-879000" [347609cd-705b-441f-941f-936a0e0574f7] Running
	I1122 00:56:36.671694  703787 system_pods.go:89] "kube-proxy-w9bqj" [f56c390b-4d40-40a3-9862-f5081a6561e5] Running
	I1122 00:56:36.671699  703787 system_pods.go:89] "kube-scheduler-embed-certs-879000" [364d55f5-1b98-4087-999c-c7302863e10f] Running
	I1122 00:56:36.671702  703787 system_pods.go:89] "storage-provisioner" [042e1631-1c8e-4ce0-92e5-cdd4742fa06b] Running
	I1122 00:56:36.671711  703787 system_pods.go:126] duration metric: took 614.250386ms to wait for k8s-apps to be running ...
	I1122 00:56:36.671718  703787 system_svc.go:44] waiting for kubelet service to be running ....
	I1122 00:56:36.671770  703787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:56:36.699052  703787 system_svc.go:56] duration metric: took 27.32418ms WaitForService to wait for kubelet
	I1122 00:56:36.699079  703787 kubeadm.go:587] duration metric: took 42.361977176s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:56:36.699098  703787 node_conditions.go:102] verifying NodePressure condition ...
	I1122 00:56:36.712685  703787 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1122 00:56:36.712717  703787 node_conditions.go:123] node cpu capacity is 2
	I1122 00:56:36.712730  703787 node_conditions.go:105] duration metric: took 13.628081ms to run NodePressure ...
	I1122 00:56:36.712743  703787 start.go:242] waiting for startup goroutines ...
	I1122 00:56:36.712751  703787 start.go:247] waiting for cluster config update ...
	I1122 00:56:36.712764  703787 start.go:256] writing updated cluster config ...
	I1122 00:56:36.713052  703787 ssh_runner.go:195] Run: rm -f paused
	I1122 00:56:36.717247  703787 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:56:36.765289  703787 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-h2kpd" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:56:36.775555  703787 pod_ready.go:94] pod "coredns-66bc5c9577-h2kpd" is "Ready"
	I1122 00:56:36.775632  703787 pod_ready.go:86] duration metric: took 10.265316ms for pod "coredns-66bc5c9577-h2kpd" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:56:36.786692  703787 pod_ready.go:83] waiting for pod "etcd-embed-certs-879000" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:56:36.798130  703787 pod_ready.go:94] pod "etcd-embed-certs-879000" is "Ready"
	I1122 00:56:36.798206  703787 pod_ready.go:86] duration metric: took 11.436318ms for pod "etcd-embed-certs-879000" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:56:36.801113  703787 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-879000" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:56:36.808561  703787 pod_ready.go:94] pod "kube-apiserver-embed-certs-879000" is "Ready"
	I1122 00:56:36.808643  703787 pod_ready.go:86] duration metric: took 7.459031ms for pod "kube-apiserver-embed-certs-879000" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:56:36.811857  703787 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-879000" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:56:37.122306  703787 pod_ready.go:94] pod "kube-controller-manager-embed-certs-879000" is "Ready"
	I1122 00:56:37.122381  703787 pod_ready.go:86] duration metric: took 310.449965ms for pod "kube-controller-manager-embed-certs-879000" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:56:37.321546  703787 pod_ready.go:83] waiting for pod "kube-proxy-w9bqj" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:56:37.721482  703787 pod_ready.go:94] pod "kube-proxy-w9bqj" is "Ready"
	I1122 00:56:37.721559  703787 pod_ready.go:86] duration metric: took 399.938005ms for pod "kube-proxy-w9bqj" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:56:37.922446  703787 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-879000" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:56:38.322013  703787 pod_ready.go:94] pod "kube-scheduler-embed-certs-879000" is "Ready"
	I1122 00:56:38.322038  703787 pod_ready.go:86] duration metric: took 399.514083ms for pod "kube-scheduler-embed-certs-879000" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:56:38.322049  703787 pod_ready.go:40] duration metric: took 1.604720908s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:56:38.413227  703787 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1122 00:56:38.416954  703787 out.go:179] * Done! kubectl is now configured to use "embed-certs-879000" cluster and "default" namespace by default
	I1122 00:56:38.457772  707914 node_ready.go:49] node "no-preload-165130" is "Ready"
	I1122 00:56:38.457816  707914 node_ready.go:38] duration metric: took 4.288025386s for node "no-preload-165130" to be "Ready" ...
	I1122 00:56:38.457832  707914 api_server.go:52] waiting for apiserver process to appear ...
	I1122 00:56:38.457886  707914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:56:40.012384  707914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.744980158s)
	I1122 00:56:40.012764  707914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.479987471s)
	I1122 00:56:40.012951  707914 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.555047696s)
	I1122 00:56:40.012971  707914 api_server.go:72] duration metric: took 6.266804844s to wait for apiserver process to appear ...
	I1122 00:56:40.012978  707914 api_server.go:88] waiting for apiserver healthz status ...
	I1122 00:56:40.012997  707914 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1122 00:56:40.016044  707914 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-165130 addons enable metrics-server
	
	I1122 00:56:40.038752  707914 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1122 00:56:40.052013  707914 api_server.go:141] control plane version: v1.34.1
	I1122 00:56:40.052109  707914 api_server.go:131] duration metric: took 39.125444ms to wait for apiserver health ...
	I1122 00:56:40.052135  707914 system_pods.go:43] waiting for kube-system pods to appear ...
	I1122 00:56:40.053961  707914 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.887985737s)
	I1122 00:56:40.057672  707914 out.go:179] * Enabled addons: dashboard, default-storageclass, storage-provisioner
	I1122 00:56:40.060839  707914 addons.go:530] duration metric: took 6.314342415s for enable addons: enabled=[dashboard default-storageclass storage-provisioner]
	I1122 00:56:40.066577  707914 system_pods.go:59] 8 kube-system pods found
	I1122 00:56:40.066675  707914 system_pods.go:61] "coredns-66bc5c9577-pt27w" [54abb602-6f61-4692-a49d-c67637de05aa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:56:40.066702  707914 system_pods.go:61] "etcd-no-preload-165130" [9924c059-bb41-4a3c-87f4-5bbf226dc98f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1122 00:56:40.066743  707914 system_pods.go:61] "kindnet-2kqbq" [431f8066-47ae-445f-ba11-89e3d9b34f04] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1122 00:56:40.066782  707914 system_pods.go:61] "kube-apiserver-no-preload-165130" [aaa8100d-a0ba-46d9-975c-7400a36bcc5f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1122 00:56:40.066829  707914 system_pods.go:61] "kube-controller-manager-no-preload-165130" [597218b5-9e1c-43ed-8c13-3560d3b80422] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1122 00:56:40.066866  707914 system_pods.go:61] "kube-proxy-kr4ll" [b7ff7069-d8ba-4340-b2a8-57db9eb94b57] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1122 00:56:40.066907  707914 system_pods.go:61] "kube-scheduler-no-preload-165130" [6dc003f2-6224-4216-8657-71ebabad3744] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1122 00:56:40.066935  707914 system_pods.go:61] "storage-provisioner" [3cb5ecac-491c-4635-85b4-a7e2719d7aec] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:56:40.066955  707914 system_pods.go:74] duration metric: took 14.800822ms to wait for pod list to return data ...
	I1122 00:56:40.066991  707914 default_sa.go:34] waiting for default service account to be created ...
	I1122 00:56:40.074398  707914 default_sa.go:45] found service account: "default"
	I1122 00:56:40.074494  707914 default_sa.go:55] duration metric: took 7.465332ms for default service account to be created ...
	I1122 00:56:40.074521  707914 system_pods.go:116] waiting for k8s-apps to be running ...
	I1122 00:56:40.078954  707914 system_pods.go:86] 8 kube-system pods found
	I1122 00:56:40.079067  707914 system_pods.go:89] "coredns-66bc5c9577-pt27w" [54abb602-6f61-4692-a49d-c67637de05aa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:56:40.079114  707914 system_pods.go:89] "etcd-no-preload-165130" [9924c059-bb41-4a3c-87f4-5bbf226dc98f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1122 00:56:40.079145  707914 system_pods.go:89] "kindnet-2kqbq" [431f8066-47ae-445f-ba11-89e3d9b34f04] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1122 00:56:40.079169  707914 system_pods.go:89] "kube-apiserver-no-preload-165130" [aaa8100d-a0ba-46d9-975c-7400a36bcc5f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1122 00:56:40.079206  707914 system_pods.go:89] "kube-controller-manager-no-preload-165130" [597218b5-9e1c-43ed-8c13-3560d3b80422] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1122 00:56:40.079261  707914 system_pods.go:89] "kube-proxy-kr4ll" [b7ff7069-d8ba-4340-b2a8-57db9eb94b57] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1122 00:56:40.079297  707914 system_pods.go:89] "kube-scheduler-no-preload-165130" [6dc003f2-6224-4216-8657-71ebabad3744] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1122 00:56:40.079323  707914 system_pods.go:89] "storage-provisioner" [3cb5ecac-491c-4635-85b4-a7e2719d7aec] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:56:40.079346  707914 system_pods.go:126] duration metric: took 4.781514ms to wait for k8s-apps to be running ...
	I1122 00:56:40.079381  707914 system_svc.go:44] waiting for kubelet service to be running ....
	I1122 00:56:40.079480  707914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:56:40.097012  707914 system_svc.go:56] duration metric: took 17.620342ms WaitForService to wait for kubelet
	I1122 00:56:40.097108  707914 kubeadm.go:587] duration metric: took 6.350930882s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:56:40.097144  707914 node_conditions.go:102] verifying NodePressure condition ...
	I1122 00:56:40.101716  707914 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1122 00:56:40.101825  707914 node_conditions.go:123] node cpu capacity is 2
	I1122 00:56:40.101855  707914 node_conditions.go:105] duration metric: took 4.675179ms to run NodePressure ...
	I1122 00:56:40.101896  707914 start.go:242] waiting for startup goroutines ...
	I1122 00:56:40.101921  707914 start.go:247] waiting for cluster config update ...
	I1122 00:56:40.101949  707914 start.go:256] writing updated cluster config ...
	I1122 00:56:40.102312  707914 ssh_runner.go:195] Run: rm -f paused
	I1122 00:56:40.107555  707914 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:56:40.121718  707914 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pt27w" in "kube-system" namespace to be "Ready" or be gone ...
	W1122 00:56:42.139934  707914 pod_ready.go:104] pod "coredns-66bc5c9577-pt27w" is not "Ready", error: <nil>
	W1122 00:56:44.628611  707914 pod_ready.go:104] pod "coredns-66bc5c9577-pt27w" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Nov 22 00:56:36 embed-certs-879000 crio[843]: time="2025-11-22T00:56:36.063988796Z" level=info msg="Created container 2ed58bd60f6809c326bde55501d241c8b59d9f7bfb61f8cbee4cc4a362dae61f: kube-system/coredns-66bc5c9577-h2kpd/coredns" id=0acf7fb2-561e-4d9a-b081-261a6e49cfae name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:56:36 embed-certs-879000 crio[843]: time="2025-11-22T00:56:36.070935546Z" level=info msg="Starting container: 2ed58bd60f6809c326bde55501d241c8b59d9f7bfb61f8cbee4cc4a362dae61f" id=7768f456-0f34-4924-b91b-f7371e431bb9 name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:56:36 embed-certs-879000 crio[843]: time="2025-11-22T00:56:36.072948673Z" level=info msg="Started container" PID=1714 containerID=2ed58bd60f6809c326bde55501d241c8b59d9f7bfb61f8cbee4cc4a362dae61f description=kube-system/coredns-66bc5c9577-h2kpd/coredns id=7768f456-0f34-4924-b91b-f7371e431bb9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3034dded06ea5c24d16b4d9f920125acaf54fbd21eb182a7b502d36044374811
	Nov 22 00:56:39 embed-certs-879000 crio[843]: time="2025-11-22T00:56:39.028908129Z" level=info msg="Running pod sandbox: default/busybox/POD" id=78e96b25-6505-45a4-b033-7790cfbe1733 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 22 00:56:39 embed-certs-879000 crio[843]: time="2025-11-22T00:56:39.028986576Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:56:39 embed-certs-879000 crio[843]: time="2025-11-22T00:56:39.046304428Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:6f0d50a4988f6c7ba5b80bfd05a37e32a976d5adae46e2574f91e404c75c03f7 UID:9c85fc23-1f39-430f-a828-390ca91fd200 NetNS:/var/run/netns/f8904e45-1628-4697-9377-b47f99f2b6f1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000b547a8}] Aliases:map[]}"
	Nov 22 00:56:39 embed-certs-879000 crio[843]: time="2025-11-22T00:56:39.046517435Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 22 00:56:39 embed-certs-879000 crio[843]: time="2025-11-22T00:56:39.061653878Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:6f0d50a4988f6c7ba5b80bfd05a37e32a976d5adae46e2574f91e404c75c03f7 UID:9c85fc23-1f39-430f-a828-390ca91fd200 NetNS:/var/run/netns/f8904e45-1628-4697-9377-b47f99f2b6f1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000b547a8}] Aliases:map[]}"
	Nov 22 00:56:39 embed-certs-879000 crio[843]: time="2025-11-22T00:56:39.062009552Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 22 00:56:39 embed-certs-879000 crio[843]: time="2025-11-22T00:56:39.073708852Z" level=info msg="Ran pod sandbox 6f0d50a4988f6c7ba5b80bfd05a37e32a976d5adae46e2574f91e404c75c03f7 with infra container: default/busybox/POD" id=78e96b25-6505-45a4-b033-7790cfbe1733 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 22 00:56:39 embed-certs-879000 crio[843]: time="2025-11-22T00:56:39.075019189Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=576650e6-86e5-4c4f-a2dc-98c4e0ad3943 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:56:39 embed-certs-879000 crio[843]: time="2025-11-22T00:56:39.075317667Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=576650e6-86e5-4c4f-a2dc-98c4e0ad3943 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:56:39 embed-certs-879000 crio[843]: time="2025-11-22T00:56:39.07547406Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=576650e6-86e5-4c4f-a2dc-98c4e0ad3943 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:56:39 embed-certs-879000 crio[843]: time="2025-11-22T00:56:39.082441233Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4d1a3327-3f41-4d75-9be5-4fecf0f85b24 name=/runtime.v1.ImageService/PullImage
	Nov 22 00:56:39 embed-certs-879000 crio[843]: time="2025-11-22T00:56:39.088806109Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 22 00:56:41 embed-certs-879000 crio[843]: time="2025-11-22T00:56:41.287584257Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=4d1a3327-3f41-4d75-9be5-4fecf0f85b24 name=/runtime.v1.ImageService/PullImage
	Nov 22 00:56:41 embed-certs-879000 crio[843]: time="2025-11-22T00:56:41.288735173Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=630665c7-e046-45fc-a435-a746ebc5c109 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:56:41 embed-certs-879000 crio[843]: time="2025-11-22T00:56:41.290758071Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b7967eb0-24f4-492a-97a0-8021dbf02e6c name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:56:41 embed-certs-879000 crio[843]: time="2025-11-22T00:56:41.299827262Z" level=info msg="Creating container: default/busybox/busybox" id=d38bf3b4-fd9d-4193-8fc8-419a7e447ac0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:56:41 embed-certs-879000 crio[843]: time="2025-11-22T00:56:41.300130819Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:56:41 embed-certs-879000 crio[843]: time="2025-11-22T00:56:41.308495136Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:56:41 embed-certs-879000 crio[843]: time="2025-11-22T00:56:41.309285144Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:56:41 embed-certs-879000 crio[843]: time="2025-11-22T00:56:41.330371189Z" level=info msg="Created container 529a2298cbecc82cee367ed88ff5ad3aefa9539985f944447f6271b32a5d1771: default/busybox/busybox" id=d38bf3b4-fd9d-4193-8fc8-419a7e447ac0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:56:41 embed-certs-879000 crio[843]: time="2025-11-22T00:56:41.333191947Z" level=info msg="Starting container: 529a2298cbecc82cee367ed88ff5ad3aefa9539985f944447f6271b32a5d1771" id=0de425f7-6f24-46de-bf98-10d2029b4f6c name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:56:41 embed-certs-879000 crio[843]: time="2025-11-22T00:56:41.337124813Z" level=info msg="Started container" PID=1774 containerID=529a2298cbecc82cee367ed88ff5ad3aefa9539985f944447f6271b32a5d1771 description=default/busybox/busybox id=0de425f7-6f24-46de-bf98-10d2029b4f6c name=/runtime.v1.RuntimeService/StartContainer sandboxID=6f0d50a4988f6c7ba5b80bfd05a37e32a976d5adae46e2574f91e404c75c03f7
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	529a2298cbecc       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago        Running             busybox                   0                   6f0d50a4988f6       busybox                                      default
	2ed58bd60f680       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 seconds ago       Running             coredns                   0                   3034dded06ea5       coredns-66bc5c9577-h2kpd                     kube-system
	dbc4289924505       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      12 seconds ago       Running             storage-provisioner       0                   5995efe9e308a       storage-provisioner                          kube-system
	2d42dd9e0d04a       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      53 seconds ago       Running             kube-proxy                0                   d0400d5b8b353       kube-proxy-w9bqj                             kube-system
	0568638004a45       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      53 seconds ago       Running             kindnet-cni               0                   2869fc6b3d34c       kindnet-j8wwg                                kube-system
	58723d057a7c4       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   e4cb231b72911       kube-controller-manager-embed-certs-879000   kube-system
	6cb4c23918a87       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   1da8e52971922       etcd-embed-certs-879000                      kube-system
	f0b35eb74debe       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   8ee6f1f870dc8       kube-scheduler-embed-certs-879000            kube-system
	e4bdb98bc9ac4       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   d92da6a181e38       kube-apiserver-embed-certs-879000            kube-system
	
	
	==> coredns [2ed58bd60f6809c326bde55501d241c8b59d9f7bfb61f8cbee4cc4a362dae61f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53556 - 31651 "HINFO IN 2713226815923376159.6319659763435830525. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012417811s
	
	
	==> describe nodes <==
	Name:               embed-certs-879000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-879000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=embed-certs-879000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T00_55_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:55:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-879000
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:56:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:56:41 +0000   Sat, 22 Nov 2025 00:55:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:56:41 +0000   Sat, 22 Nov 2025 00:55:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:56:41 +0000   Sat, 22 Nov 2025 00:55:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 22 Nov 2025 00:56:41 +0000   Sat, 22 Nov 2025 00:56:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-879000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c92eea4d3c03c0156e01fcf8691e3907
	  System UUID:                37f1296a-29c2-4a0f-8fef-fc1d195b0150
	  Boot ID:                    72ac7385-472f-47d1-a23e-bd80468d6e09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-h2kpd                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     55s
	  kube-system                 etcd-embed-certs-879000                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         60s
	  kube-system                 kindnet-j8wwg                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      55s
	  kube-system                 kube-apiserver-embed-certs-879000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-controller-manager-embed-certs-879000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-proxy-w9bqj                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-scheduler-embed-certs-879000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 53s                kube-proxy       
	  Normal   Starting                 72s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 72s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  72s (x8 over 72s)  kubelet          Node embed-certs-879000 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    72s (x8 over 72s)  kubelet          Node embed-certs-879000 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     72s (x8 over 72s)  kubelet          Node embed-certs-879000 status is now: NodeHasSufficientPID
	  Normal   Starting                 59s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 59s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  59s                kubelet          Node embed-certs-879000 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s                kubelet          Node embed-certs-879000 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s                kubelet          Node embed-certs-879000 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s                node-controller  Node embed-certs-879000 event: Registered Node embed-certs-879000 in Controller
	  Normal   NodeReady                14s                kubelet          Node embed-certs-879000 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov22 00:32] overlayfs: idmapped layers are currently not supported
	[Nov22 00:33] overlayfs: idmapped layers are currently not supported
	[Nov22 00:35] overlayfs: idmapped layers are currently not supported
	[Nov22 00:36] overlayfs: idmapped layers are currently not supported
	[ +18.168104] overlayfs: idmapped layers are currently not supported
	[Nov22 00:37] overlayfs: idmapped layers are currently not supported
	[ +56.322609] overlayfs: idmapped layers are currently not supported
	[Nov22 00:38] overlayfs: idmapped layers are currently not supported
	[Nov22 00:39] overlayfs: idmapped layers are currently not supported
	[ +23.174928] overlayfs: idmapped layers are currently not supported
	[Nov22 00:41] overlayfs: idmapped layers are currently not supported
	[Nov22 00:42] overlayfs: idmapped layers are currently not supported
	[Nov22 00:44] overlayfs: idmapped layers are currently not supported
	[Nov22 00:45] overlayfs: idmapped layers are currently not supported
	[Nov22 00:46] overlayfs: idmapped layers are currently not supported
	[Nov22 00:48] overlayfs: idmapped layers are currently not supported
	[Nov22 00:50] overlayfs: idmapped layers are currently not supported
	[Nov22 00:51] overlayfs: idmapped layers are currently not supported
	[ +11.900293] overlayfs: idmapped layers are currently not supported
	[ +28.922055] overlayfs: idmapped layers are currently not supported
	[Nov22 00:52] overlayfs: idmapped layers are currently not supported
	[Nov22 00:53] overlayfs: idmapped layers are currently not supported
	[Nov22 00:54] overlayfs: idmapped layers are currently not supported
	[Nov22 00:55] overlayfs: idmapped layers are currently not supported
	[Nov22 00:56] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6cb4c23918a874a6a4d2ca075706b0e444a1301d9c19d0308447fbc4d5ca8240] <==
	{"level":"warn","ts":"2025-11-22T00:55:42.899067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:55:42.918514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:55:42.935620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:55:42.985347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:55:42.996875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:55:43.007763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:55:43.035942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:55:43.055690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:55:43.076067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:55:43.093947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:55:43.109077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:55:43.132593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:55:43.163466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:55:43.176611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:55:43.194061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:55:43.225133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:55:43.249762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:55:43.312057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:55:43.345073Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:55:43.402349Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:55:43.439752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:55:43.504437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:55:43.513508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:55:43.566585Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:55:43.719906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59118","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 00:56:49 up  5:38,  0 user,  load average: 3.99, 3.89, 2.86
	Linux embed-certs-879000 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0568638004a45a971ec55e785da69fdf48d644ddf4325e311c8ef847c27597a0] <==
	I1122 00:55:55.116845       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1122 00:55:55.117124       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1122 00:55:55.117269       1 main.go:148] setting mtu 1500 for CNI 
	I1122 00:55:55.117281       1 main.go:178] kindnetd IP family: "ipv4"
	I1122 00:55:55.117291       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-22T00:55:55Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1122 00:55:55.316977       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1122 00:55:55.316994       1 controller.go:381] "Waiting for informer caches to sync"
	I1122 00:55:55.317002       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1122 00:55:55.317282       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1122 00:56:25.316851       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1122 00:56:25.316983       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1122 00:56:25.317865       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1122 00:56:25.317908       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1122 00:56:26.817958       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1122 00:56:26.818053       1 metrics.go:72] Registering metrics
	I1122 00:56:26.818153       1 controller.go:711] "Syncing nftables rules"
	I1122 00:56:35.323951       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1122 00:56:35.324072       1 main.go:301] handling current node
	I1122 00:56:45.318179       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1122 00:56:45.318226       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e4bdb98bc9ac4d66c5201d67ee5d53156f741b6bf3edbc842a2eaab74d6a938b] <==
	I1122 00:55:45.686349       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1122 00:55:45.625624       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1122 00:55:45.686690       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1122 00:55:45.776036       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:55:45.794414       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1122 00:55:45.796987       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1122 00:55:45.860154       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:55:45.868366       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1122 00:55:45.954143       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1122 00:55:46.004505       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1122 00:55:46.004538       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1122 00:55:47.470446       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1122 00:55:47.637013       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1122 00:55:47.883221       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1122 00:55:47.932809       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1122 00:55:47.935801       1 controller.go:667] quota admission added evaluator for: endpoints
	I1122 00:55:47.951377       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1122 00:55:48.413050       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1122 00:55:50.383056       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1122 00:55:50.435299       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1122 00:55:50.447788       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1122 00:55:53.556246       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1122 00:55:54.314418       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:55:54.322050       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:55:54.508914       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [58723d057a7c4d7a4599412b65ed49e4b817655714dbda1d84a537a9f4a8a6da] <==
	I1122 00:55:53.406954       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1122 00:55:53.406955       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1122 00:55:53.407022       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1122 00:55:53.408140       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1122 00:55:53.408190       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:55:53.410675       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1122 00:55:53.422668       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1122 00:55:53.423181       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:55:53.423227       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1122 00:55:53.423270       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1122 00:55:53.437987       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:55:53.438674       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1122 00:55:53.443366       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1122 00:55:53.451329       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1122 00:55:53.451344       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1122 00:55:53.451468       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1122 00:55:53.451620       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1122 00:55:53.451900       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1122 00:55:53.453853       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1122 00:55:53.455067       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1122 00:55:53.455130       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1122 00:55:53.455200       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-879000"
	I1122 00:55:53.455242       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1122 00:55:53.460436       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:56:38.460900       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [2d42dd9e0d04aef5a7e0925c492a79dc963672b5b17a5fff09978a1dd440bbe8] <==
	I1122 00:55:55.181181       1 server_linux.go:53] "Using iptables proxy"
	I1122 00:55:55.251520       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1122 00:55:55.352064       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1122 00:55:55.352105       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1122 00:55:55.352183       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1122 00:55:55.394534       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1122 00:55:55.394593       1 server_linux.go:132] "Using iptables Proxier"
	I1122 00:55:55.403433       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1122 00:55:55.403804       1 server.go:527] "Version info" version="v1.34.1"
	I1122 00:55:55.403820       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:55:55.416262       1 config.go:403] "Starting serviceCIDR config controller"
	I1122 00:55:55.416300       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1122 00:55:55.416517       1 config.go:200] "Starting service config controller"
	I1122 00:55:55.416523       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1122 00:55:55.416636       1 config.go:106] "Starting endpoint slice config controller"
	I1122 00:55:55.416645       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1122 00:55:55.417669       1 config.go:309] "Starting node config controller"
	I1122 00:55:55.417695       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1122 00:55:55.417703       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1122 00:55:55.518555       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1122 00:55:55.518607       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1122 00:55:55.518621       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [f0b35eb74debe9fa89e9f3045ec344e30a3bfcaa5bb70fc56c714c7cc3404c2d] <==
	I1122 00:55:47.264165       1 serving.go:386] Generated self-signed cert in-memory
	I1122 00:55:49.536591       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1122 00:55:49.536636       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:55:49.541594       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1122 00:55:49.541629       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1122 00:55:49.541673       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:55:49.541685       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:55:49.541700       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1122 00:55:49.541713       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1122 00:55:49.542840       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1122 00:55:49.542908       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1122 00:55:49.642473       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1122 00:55:49.642678       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:55:49.642425       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 22 00:55:53 embed-certs-879000 kubelet[1306]: I1122 00:55:53.427167    1306 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 22 00:55:53 embed-certs-879000 kubelet[1306]: I1122 00:55:53.427708    1306 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 22 00:55:54 embed-certs-879000 kubelet[1306]: I1122 00:55:54.703328    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/29cadb16-a427-4f8b-b121-3af35927f8d5-cni-cfg\") pod \"kindnet-j8wwg\" (UID: \"29cadb16-a427-4f8b-b121-3af35927f8d5\") " pod="kube-system/kindnet-j8wwg"
	Nov 22 00:55:54 embed-certs-879000 kubelet[1306]: I1122 00:55:54.703905    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/29cadb16-a427-4f8b-b121-3af35927f8d5-xtables-lock\") pod \"kindnet-j8wwg\" (UID: \"29cadb16-a427-4f8b-b121-3af35927f8d5\") " pod="kube-system/kindnet-j8wwg"
	Nov 22 00:55:54 embed-certs-879000 kubelet[1306]: I1122 00:55:54.704043    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/29cadb16-a427-4f8b-b121-3af35927f8d5-lib-modules\") pod \"kindnet-j8wwg\" (UID: \"29cadb16-a427-4f8b-b121-3af35927f8d5\") " pod="kube-system/kindnet-j8wwg"
	Nov 22 00:55:54 embed-certs-879000 kubelet[1306]: I1122 00:55:54.704206    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgljx\" (UniqueName: \"kubernetes.io/projected/29cadb16-a427-4f8b-b121-3af35927f8d5-kube-api-access-vgljx\") pod \"kindnet-j8wwg\" (UID: \"29cadb16-a427-4f8b-b121-3af35927f8d5\") " pod="kube-system/kindnet-j8wwg"
	Nov 22 00:55:54 embed-certs-879000 kubelet[1306]: I1122 00:55:54.808543    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f56c390b-4d40-40a3-9862-f5081a6561e5-xtables-lock\") pod \"kube-proxy-w9bqj\" (UID: \"f56c390b-4d40-40a3-9862-f5081a6561e5\") " pod="kube-system/kube-proxy-w9bqj"
	Nov 22 00:55:54 embed-certs-879000 kubelet[1306]: I1122 00:55:54.808615    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f56c390b-4d40-40a3-9862-f5081a6561e5-kube-proxy\") pod \"kube-proxy-w9bqj\" (UID: \"f56c390b-4d40-40a3-9862-f5081a6561e5\") " pod="kube-system/kube-proxy-w9bqj"
	Nov 22 00:55:54 embed-certs-879000 kubelet[1306]: I1122 00:55:54.808635    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td7cd\" (UniqueName: \"kubernetes.io/projected/f56c390b-4d40-40a3-9862-f5081a6561e5-kube-api-access-td7cd\") pod \"kube-proxy-w9bqj\" (UID: \"f56c390b-4d40-40a3-9862-f5081a6561e5\") " pod="kube-system/kube-proxy-w9bqj"
	Nov 22 00:55:54 embed-certs-879000 kubelet[1306]: I1122 00:55:54.808673    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f56c390b-4d40-40a3-9862-f5081a6561e5-lib-modules\") pod \"kube-proxy-w9bqj\" (UID: \"f56c390b-4d40-40a3-9862-f5081a6561e5\") " pod="kube-system/kube-proxy-w9bqj"
	Nov 22 00:55:54 embed-certs-879000 kubelet[1306]: I1122 00:55:54.867518    1306 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 22 00:55:54 embed-certs-879000 kubelet[1306]: W1122 00:55:54.929895    1306 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/a6fb6b81dce5b3616a7bbeb7273662d346745bb8e0cc70a843d43e0a09366fd0/crio-2869fc6b3d34cd4ac2e461794e25e8e13ff5351c590be131418a655e93fad167 WatchSource:0}: Error finding container 2869fc6b3d34cd4ac2e461794e25e8e13ff5351c590be131418a655e93fad167: Status 404 returned error can't find the container with id 2869fc6b3d34cd4ac2e461794e25e8e13ff5351c590be131418a655e93fad167
	Nov 22 00:55:55 embed-certs-879000 kubelet[1306]: W1122 00:55:55.024474    1306 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/a6fb6b81dce5b3616a7bbeb7273662d346745bb8e0cc70a843d43e0a09366fd0/crio-d0400d5b8b3531f232a3543bbf4cf0d0cba671983903acf65bd93dc85ed90167 WatchSource:0}: Error finding container d0400d5b8b3531f232a3543bbf4cf0d0cba671983903acf65bd93dc85ed90167: Status 404 returned error can't find the container with id d0400d5b8b3531f232a3543bbf4cf0d0cba671983903acf65bd93dc85ed90167
	Nov 22 00:55:55 embed-certs-879000 kubelet[1306]: I1122 00:55:55.590859    1306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-w9bqj" podStartSLOduration=1.590840145 podStartE2EDuration="1.590840145s" podCreationTimestamp="2025-11-22 00:55:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:55:55.534082471 +0000 UTC m=+5.297587431" watchObservedRunningTime="2025-11-22 00:55:55.590840145 +0000 UTC m=+5.354345113"
	Nov 22 00:55:55 embed-certs-879000 kubelet[1306]: I1122 00:55:55.671167    1306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-j8wwg" podStartSLOduration=1.67114853 podStartE2EDuration="1.67114853s" podCreationTimestamp="2025-11-22 00:55:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:55:55.667478156 +0000 UTC m=+5.430983124" watchObservedRunningTime="2025-11-22 00:55:55.67114853 +0000 UTC m=+5.434653490"
	Nov 22 00:56:35 embed-certs-879000 kubelet[1306]: I1122 00:56:35.522730    1306 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 22 00:56:35 embed-certs-879000 kubelet[1306]: I1122 00:56:35.635156    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5adad534-0ba4-479f-8e5a-7f5a9e26fb1e-config-volume\") pod \"coredns-66bc5c9577-h2kpd\" (UID: \"5adad534-0ba4-479f-8e5a-7f5a9e26fb1e\") " pod="kube-system/coredns-66bc5c9577-h2kpd"
	Nov 22 00:56:35 embed-certs-879000 kubelet[1306]: I1122 00:56:35.635399    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46pqx\" (UniqueName: \"kubernetes.io/projected/042e1631-1c8e-4ce0-92e5-cdd4742fa06b-kube-api-access-46pqx\") pod \"storage-provisioner\" (UID: \"042e1631-1c8e-4ce0-92e5-cdd4742fa06b\") " pod="kube-system/storage-provisioner"
	Nov 22 00:56:35 embed-certs-879000 kubelet[1306]: I1122 00:56:35.635506    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lmrw\" (UniqueName: \"kubernetes.io/projected/5adad534-0ba4-479f-8e5a-7f5a9e26fb1e-kube-api-access-2lmrw\") pod \"coredns-66bc5c9577-h2kpd\" (UID: \"5adad534-0ba4-479f-8e5a-7f5a9e26fb1e\") " pod="kube-system/coredns-66bc5c9577-h2kpd"
	Nov 22 00:56:35 embed-certs-879000 kubelet[1306]: I1122 00:56:35.635595    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/042e1631-1c8e-4ce0-92e5-cdd4742fa06b-tmp\") pod \"storage-provisioner\" (UID: \"042e1631-1c8e-4ce0-92e5-cdd4742fa06b\") " pod="kube-system/storage-provisioner"
	Nov 22 00:56:35 embed-certs-879000 kubelet[1306]: W1122 00:56:35.940782    1306 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/a6fb6b81dce5b3616a7bbeb7273662d346745bb8e0cc70a843d43e0a09366fd0/crio-3034dded06ea5c24d16b4d9f920125acaf54fbd21eb182a7b502d36044374811 WatchSource:0}: Error finding container 3034dded06ea5c24d16b4d9f920125acaf54fbd21eb182a7b502d36044374811: Status 404 returned error can't find the container with id 3034dded06ea5c24d16b4d9f920125acaf54fbd21eb182a7b502d36044374811
	Nov 22 00:56:36 embed-certs-879000 kubelet[1306]: I1122 00:56:36.625160    1306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.625143875 podStartE2EDuration="41.625143875s" podCreationTimestamp="2025-11-22 00:55:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:56:36.608647964 +0000 UTC m=+46.372152932" watchObservedRunningTime="2025-11-22 00:56:36.625143875 +0000 UTC m=+46.388648843"
	Nov 22 00:56:38 embed-certs-879000 kubelet[1306]: I1122 00:56:38.711185    1306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-h2kpd" podStartSLOduration=44.711168333 podStartE2EDuration="44.711168333s" podCreationTimestamp="2025-11-22 00:55:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:56:36.626757604 +0000 UTC m=+46.390262580" watchObservedRunningTime="2025-11-22 00:56:38.711168333 +0000 UTC m=+48.474673301"
	Nov 22 00:56:38 embed-certs-879000 kubelet[1306]: I1122 00:56:38.783699    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-775bz\" (UniqueName: \"kubernetes.io/projected/9c85fc23-1f39-430f-a828-390ca91fd200-kube-api-access-775bz\") pod \"busybox\" (UID: \"9c85fc23-1f39-430f-a828-390ca91fd200\") " pod="default/busybox"
	Nov 22 00:56:39 embed-certs-879000 kubelet[1306]: W1122 00:56:39.070030    1306 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/a6fb6b81dce5b3616a7bbeb7273662d346745bb8e0cc70a843d43e0a09366fd0/crio-6f0d50a4988f6c7ba5b80bfd05a37e32a976d5adae46e2574f91e404c75c03f7 WatchSource:0}: Error finding container 6f0d50a4988f6c7ba5b80bfd05a37e32a976d5adae46e2574f91e404c75c03f7: Status 404 returned error can't find the container with id 6f0d50a4988f6c7ba5b80bfd05a37e32a976d5adae46e2574f91e404c75c03f7
	
	
	==> storage-provisioner [dbc4289924505a1a80f14d41c188b956f3a4fa64cf59aeefa0a7f71b501dc820] <==
	I1122 00:56:36.107006       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1122 00:56:36.218569       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1122 00:56:36.218771       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1122 00:56:36.221460       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:56:36.231628       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1122 00:56:36.231902       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1122 00:56:36.232056       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2574480b-6478-427e-83f2-2c518ace1325", APIVersion:"v1", ResourceVersion:"455", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-879000_2081ece5-557e-4d38-aedc-29f00b6c2f29 became leader
	I1122 00:56:36.232824       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-879000_2081ece5-557e-4d38-aedc-29f00b6c2f29!
	W1122 00:56:36.248975       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:56:36.253244       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1122 00:56:36.333782       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-879000_2081ece5-557e-4d38-aedc-29f00b6c2f29!
	W1122 00:56:38.256236       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:56:38.260698       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:56:40.263861       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:56:40.268701       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:56:42.272852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:56:42.281223       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:56:44.284362       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:56:44.289723       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:56:46.292664       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:56:46.297569       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:56:48.301227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:56:48.317537       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-879000 -n embed-certs-879000
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-879000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.73s)
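
To reproduce this post-mortem collection by hand, a minimal sketch (assuming the binary path and profile name shown in the log above; the sectioned dump appears to correspond to the minikube logs subcommand):

	out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-879000 -n embed-certs-879000
	kubectl --context embed-certs-879000 get po -A --field-selector=status.phase!=Running
	out/minikube-linux-arm64 -p embed-certs-879000 logs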

TestStartStop/group/no-preload/serial/Pause (8.29s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-165130 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-165130 --alsologtostderr -v=1: exit status 80 (2.30939473s)

-- stdout --
	* Pausing node no-preload-165130 ... 
	
	

-- /stdout --
** stderr ** 
	I1122 00:57:24.695272  712910 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:57:24.695439  712910 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:57:24.695445  712910 out.go:374] Setting ErrFile to fd 2...
	I1122 00:57:24.695449  712910 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:57:24.695727  712910 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1122 00:57:24.695970  712910 out.go:368] Setting JSON to false
	I1122 00:57:24.695986  712910 mustload.go:66] Loading cluster: no-preload-165130
	I1122 00:57:24.697077  712910 config.go:182] Loaded profile config "no-preload-165130": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:57:24.697695  712910 cli_runner.go:164] Run: docker container inspect no-preload-165130 --format={{.State.Status}}
	I1122 00:57:24.723461  712910 host.go:66] Checking if "no-preload-165130" exists ...
	I1122 00:57:24.723768  712910 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:57:24.846053  712910 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-11-22 00:57:24.835729849 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:57:24.846682  712910 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-165130 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1122 00:57:24.852429  712910 out.go:179] * Pausing node no-preload-165130 ... 
	I1122 00:57:24.856673  712910 host.go:66] Checking if "no-preload-165130" exists ...
	I1122 00:57:24.857035  712910 ssh_runner.go:195] Run: systemctl --version
	I1122 00:57:24.857079  712910 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-165130
	I1122 00:57:24.879510  712910 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33792 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/no-preload-165130/id_rsa Username:docker}
	I1122 00:57:24.987192  712910 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:57:25.003819  712910 pause.go:52] kubelet running: true
	I1122 00:57:25.003902  712910 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1122 00:57:25.349341  712910 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1122 00:57:25.349426  712910 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1122 00:57:25.443192  712910 cri.go:89] found id: "8e14a16d0cdf92563d2c084ac281b0188eb1b47be445498e8be3c53d189c3b19"
	I1122 00:57:25.443282  712910 cri.go:89] found id: "000dd70b698217aa6d95bd509cf47f3362cb467c72c920839b74ece78d579568"
	I1122 00:57:25.443294  712910 cri.go:89] found id: "1b8b4ae3716d0f09f3a149118c900081488b110516bacbfdff2a7628edfb0a3c"
	I1122 00:57:25.443298  712910 cri.go:89] found id: "68cc2c66134d4931da62d73545c07672b9868b40d69e94529e89109da91c6ae0"
	I1122 00:57:25.443302  712910 cri.go:89] found id: "7f4fb030dec090c0c97a17566010db1ca14f9d615fa6361747c4bee8a0793d79"
	I1122 00:57:25.443305  712910 cri.go:89] found id: "d445939f66bc78fcb625769cedbe045c3808629a73405f7634fc8403e2147225"
	I1122 00:57:25.443308  712910 cri.go:89] found id: "1842c88afa2f92b8f5db1d2172ed30aa44f94bd9a078bc60055ef3ff3665300f"
	I1122 00:57:25.443311  712910 cri.go:89] found id: "4a703cddbc0fe37c59864ce18d11f40681bb2c9564af9cff7e041d5680b0df58"
	I1122 00:57:25.443315  712910 cri.go:89] found id: "447fcc475a7323f248b3a0cdb76a205b4bbe6be27f083fa2031b33a0533a533e"
	I1122 00:57:25.443321  712910 cri.go:89] found id: "61a964437806ac8e9ff5933c7a44cae8231f1736a03e970fe7de22ff50d297f6"
	I1122 00:57:25.443325  712910 cri.go:89] found id: "f3789d40f639b65e128847fc1b9af95f63a6b0dd3621ef61be5f39b47cd48613"
	I1122 00:57:25.443329  712910 cri.go:89] found id: ""
	I1122 00:57:25.443383  712910 ssh_runner.go:195] Run: sudo runc list -f json
	I1122 00:57:25.456120  712910 retry.go:31] will retry after 199.753127ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:57:25Z" level=error msg="open /run/runc: no such file or directory"
	I1122 00:57:25.656592  712910 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:57:25.670839  712910 pause.go:52] kubelet running: false
	I1122 00:57:25.670977  712910 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1122 00:57:25.886505  712910 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1122 00:57:25.886591  712910 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1122 00:57:25.955981  712910 cri.go:89] found id: "8e14a16d0cdf92563d2c084ac281b0188eb1b47be445498e8be3c53d189c3b19"
	I1122 00:57:25.956047  712910 cri.go:89] found id: "000dd70b698217aa6d95bd509cf47f3362cb467c72c920839b74ece78d579568"
	I1122 00:57:25.956062  712910 cri.go:89] found id: "1b8b4ae3716d0f09f3a149118c900081488b110516bacbfdff2a7628edfb0a3c"
	I1122 00:57:25.956066  712910 cri.go:89] found id: "68cc2c66134d4931da62d73545c07672b9868b40d69e94529e89109da91c6ae0"
	I1122 00:57:25.956069  712910 cri.go:89] found id: "7f4fb030dec090c0c97a17566010db1ca14f9d615fa6361747c4bee8a0793d79"
	I1122 00:57:25.956073  712910 cri.go:89] found id: "d445939f66bc78fcb625769cedbe045c3808629a73405f7634fc8403e2147225"
	I1122 00:57:25.956076  712910 cri.go:89] found id: "1842c88afa2f92b8f5db1d2172ed30aa44f94bd9a078bc60055ef3ff3665300f"
	I1122 00:57:25.956080  712910 cri.go:89] found id: "4a703cddbc0fe37c59864ce18d11f40681bb2c9564af9cff7e041d5680b0df58"
	I1122 00:57:25.956083  712910 cri.go:89] found id: "447fcc475a7323f248b3a0cdb76a205b4bbe6be27f083fa2031b33a0533a533e"
	I1122 00:57:25.956089  712910 cri.go:89] found id: "61a964437806ac8e9ff5933c7a44cae8231f1736a03e970fe7de22ff50d297f6"
	I1122 00:57:25.956092  712910 cri.go:89] found id: "f3789d40f639b65e128847fc1b9af95f63a6b0dd3621ef61be5f39b47cd48613"
	I1122 00:57:25.956096  712910 cri.go:89] found id: ""
	I1122 00:57:25.956172  712910 ssh_runner.go:195] Run: sudo runc list -f json
	I1122 00:57:25.968765  712910 retry.go:31] will retry after 542.079876ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:57:25Z" level=error msg="open /run/runc: no such file or directory"
	I1122 00:57:26.511142  712910 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:57:26.531174  712910 pause.go:52] kubelet running: false
	I1122 00:57:26.531330  712910 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1122 00:57:26.760640  712910 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1122 00:57:26.760787  712910 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1122 00:57:26.895848  712910 cri.go:89] found id: "8e14a16d0cdf92563d2c084ac281b0188eb1b47be445498e8be3c53d189c3b19"
	I1122 00:57:26.895921  712910 cri.go:89] found id: "000dd70b698217aa6d95bd509cf47f3362cb467c72c920839b74ece78d579568"
	I1122 00:57:26.895940  712910 cri.go:89] found id: "1b8b4ae3716d0f09f3a149118c900081488b110516bacbfdff2a7628edfb0a3c"
	I1122 00:57:26.895963  712910 cri.go:89] found id: "68cc2c66134d4931da62d73545c07672b9868b40d69e94529e89109da91c6ae0"
	I1122 00:57:26.895981  712910 cri.go:89] found id: "7f4fb030dec090c0c97a17566010db1ca14f9d615fa6361747c4bee8a0793d79"
	I1122 00:57:26.895999  712910 cri.go:89] found id: "d445939f66bc78fcb625769cedbe045c3808629a73405f7634fc8403e2147225"
	I1122 00:57:26.896016  712910 cri.go:89] found id: "1842c88afa2f92b8f5db1d2172ed30aa44f94bd9a078bc60055ef3ff3665300f"
	I1122 00:57:26.896033  712910 cri.go:89] found id: "4a703cddbc0fe37c59864ce18d11f40681bb2c9564af9cff7e041d5680b0df58"
	I1122 00:57:26.896050  712910 cri.go:89] found id: "447fcc475a7323f248b3a0cdb76a205b4bbe6be27f083fa2031b33a0533a533e"
	I1122 00:57:26.896072  712910 cri.go:89] found id: "61a964437806ac8e9ff5933c7a44cae8231f1736a03e970fe7de22ff50d297f6"
	I1122 00:57:26.896089  712910 cri.go:89] found id: "f3789d40f639b65e128847fc1b9af95f63a6b0dd3621ef61be5f39b47cd48613"
	I1122 00:57:26.896106  712910 cri.go:89] found id: ""
	I1122 00:57:26.896173  712910 ssh_runner.go:195] Run: sudo runc list -f json
	I1122 00:57:26.917783  712910 out.go:203] 
	W1122 00:57:26.920387  712910 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:57:26Z" level=error msg="open /run/runc: no such file or directory"
	
	W1122 00:57:26.920412  712910 out.go:285] * 
	W1122 00:57:26.933360  712910 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1122 00:57:26.936826  712910 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-165130 --alsologtostderr -v=1 failed: exit status 80
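Editor's note: the pause attempt above fails because `sudo runc list -f json` cannot open /run/runc on the node; minikube retries the listing with a growing delay (about 200ms, then 542ms in the log) before exiting with GUEST_PAUSE. The following is a minimal sketch of that retry-and-give-up pattern, not minikube's actual retry package: the helper name and the simple doubling delay are assumptions for illustration, and it must be run on the node itself.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// listRunningContainers shells out to `sudo runc list -f json`, retrying with a
// growing delay when the command fails (as it does above when /run/runc is missing).
func listRunningContainers(attempts int, delay time.Duration) ([]byte, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err == nil {
			return out, nil
		}
		lastErr = fmt.Errorf("list running: runc: %v: %s", err, out)
		fmt.Printf("will retry after %v: %v\n", delay, lastErr)
		time.Sleep(delay)
		delay *= 2 // simple doubling; the real backoff differs
	}
	return nil, lastErr
}

func main() {
	if _, err := listRunningContainers(3, 200*time.Millisecond); err != nil {
		fmt.Println("giving up (GUEST_PAUSE-style failure):", err)
	}
}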
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-165130
helpers_test.go:243: (dbg) docker inspect no-preload-165130:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1c65dce5fc4b31daf0735f57309cfc4584efb68447875212bc03d5cc1861ca03",
	        "Created": "2025-11-22T00:54:44.324816446Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 708042,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-22T00:56:26.217221926Z",
	            "FinishedAt": "2025-11-22T00:56:25.408968932Z"
	        },
	        "Image": "sha256:97061622515a85470dad13cb046ad5fc5021fcd2b05aa921a620abb52951cd5d",
	        "ResolvConfPath": "/var/lib/docker/containers/1c65dce5fc4b31daf0735f57309cfc4584efb68447875212bc03d5cc1861ca03/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1c65dce5fc4b31daf0735f57309cfc4584efb68447875212bc03d5cc1861ca03/hostname",
	        "HostsPath": "/var/lib/docker/containers/1c65dce5fc4b31daf0735f57309cfc4584efb68447875212bc03d5cc1861ca03/hosts",
	        "LogPath": "/var/lib/docker/containers/1c65dce5fc4b31daf0735f57309cfc4584efb68447875212bc03d5cc1861ca03/1c65dce5fc4b31daf0735f57309cfc4584efb68447875212bc03d5cc1861ca03-json.log",
	        "Name": "/no-preload-165130",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-165130:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-165130",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1c65dce5fc4b31daf0735f57309cfc4584efb68447875212bc03d5cc1861ca03",
	                "LowerDir": "/var/lib/docker/overlay2/1fcc7ed347f82b0a593d86e8b13d7b8b6ed58d69e01b67e3031748c6c4f0b12f-init/diff:/var/lib/docker/overlay2/7e8788c6de692bc1c3758a2bb2c4b8da0fbba26855f855c0f3b655bfbdd92f8e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1fcc7ed347f82b0a593d86e8b13d7b8b6ed58d69e01b67e3031748c6c4f0b12f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1fcc7ed347f82b0a593d86e8b13d7b8b6ed58d69e01b67e3031748c6c4f0b12f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1fcc7ed347f82b0a593d86e8b13d7b8b6ed58d69e01b67e3031748c6c4f0b12f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-165130",
	                "Source": "/var/lib/docker/volumes/no-preload-165130/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-165130",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-165130",
	                "name.minikube.sigs.k8s.io": "no-preload-165130",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d7b03a47c62a7b8c32c7f61807f04ad75303ad145f5825429b5ec3cec82730d2",
	            "SandboxKey": "/var/run/docker/netns/d7b03a47c62a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33792"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33793"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33796"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33794"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33795"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-165130": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "06:93:52:56:ee:c4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0ab9f51973bdd85552219ab44a532b9743aba79f533b4d8124872498c1e7cb0a",
	                    "EndpointID": "564cd05f4317668e6f4ead3bb58f580b7413940d38dbde80038befae2aa1a688",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-165130",
	                        "1c65dce5fc4b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
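Editor's note: the inspect output above shows host port 33792 bound to container port 22/tcp, which is the SSH endpoint the pause command dialed earlier (sshutil.go line in the stderr log). A minimal sketch of reading that mapping with the same Go template the cli_runner invokes, assuming the container name from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same template string the log shows being passed to `docker container inspect -f`.
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "no-preload-165130").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // 33792 in this run
}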
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-165130 -n no-preload-165130
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-165130 -n no-preload-165130: exit status 2 (446.8551ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
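Editor's note: the post-mortem helper tolerates the non-zero exit here because the host column still prints "Running". A minimal sketch of that tolerant status probe, reusing the binary path and profile name from this report (how the test helper actually wraps the command is not shown here):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", "no-preload-165130", "-n", "no-preload-165130")
	out, err := cmd.Output() // stdout is still captured when the exit code is non-zero
	host := strings.TrimSpace(string(out))
	if err != nil {
		fmt.Printf("status error: %v (may be ok)\n", err)
	}
	fmt.Println("host state:", host) // "Running" in the run above despite exit status 2
}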
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-165130 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-165130 logs -n 25: (1.708644146s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ ssh     │ -p cert-options-002126 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-002126    │ jenkins │ v1.37.0 │ 22 Nov 25 00:52 UTC │ 22 Nov 25 00:52 UTC │
	│ delete  │ -p cert-options-002126                                                                                                                                                                                                                        │ cert-options-002126    │ jenkins │ v1.37.0 │ 22 Nov 25 00:52 UTC │ 22 Nov 25 00:52 UTC │
	│ start   │ -p old-k8s-version-625837 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-625837 │ jenkins │ v1.37.0 │ 22 Nov 25 00:52 UTC │ 22 Nov 25 00:53 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-625837 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-625837 │ jenkins │ v1.37.0 │ 22 Nov 25 00:53 UTC │                     │
	│ stop    │ -p old-k8s-version-625837 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-625837 │ jenkins │ v1.37.0 │ 22 Nov 25 00:53 UTC │ 22 Nov 25 00:53 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-625837 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-625837 │ jenkins │ v1.37.0 │ 22 Nov 25 00:53 UTC │ 22 Nov 25 00:53 UTC │
	│ start   │ -p old-k8s-version-625837 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-625837 │ jenkins │ v1.37.0 │ 22 Nov 25 00:53 UTC │ 22 Nov 25 00:54 UTC │
	│ image   │ old-k8s-version-625837 image list --format=json                                                                                                                                                                                               │ old-k8s-version-625837 │ jenkins │ v1.37.0 │ 22 Nov 25 00:54 UTC │ 22 Nov 25 00:54 UTC │
	│ pause   │ -p old-k8s-version-625837 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-625837 │ jenkins │ v1.37.0 │ 22 Nov 25 00:54 UTC │                     │
	│ start   │ -p cert-expiration-621390 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-621390 │ jenkins │ v1.37.0 │ 22 Nov 25 00:54 UTC │ 22 Nov 25 00:55 UTC │
	│ delete  │ -p old-k8s-version-625837                                                                                                                                                                                                                     │ old-k8s-version-625837 │ jenkins │ v1.37.0 │ 22 Nov 25 00:54 UTC │ 22 Nov 25 00:54 UTC │
	│ delete  │ -p old-k8s-version-625837                                                                                                                                                                                                                     │ old-k8s-version-625837 │ jenkins │ v1.37.0 │ 22 Nov 25 00:54 UTC │ 22 Nov 25 00:54 UTC │
	│ start   │ -p no-preload-165130 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-165130      │ jenkins │ v1.37.0 │ 22 Nov 25 00:54 UTC │ 22 Nov 25 00:56 UTC │
	│ delete  │ -p cert-expiration-621390                                                                                                                                                                                                                     │ cert-expiration-621390 │ jenkins │ v1.37.0 │ 22 Nov 25 00:55 UTC │ 22 Nov 25 00:55 UTC │
	│ start   │ -p embed-certs-879000 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-879000     │ jenkins │ v1.37.0 │ 22 Nov 25 00:55 UTC │ 22 Nov 25 00:56 UTC │
	│ addons  │ enable metrics-server -p no-preload-165130 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-165130      │ jenkins │ v1.37.0 │ 22 Nov 25 00:56 UTC │                     │
	│ stop    │ -p no-preload-165130 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-165130      │ jenkins │ v1.37.0 │ 22 Nov 25 00:56 UTC │ 22 Nov 25 00:56 UTC │
	│ addons  │ enable dashboard -p no-preload-165130 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-165130      │ jenkins │ v1.37.0 │ 22 Nov 25 00:56 UTC │ 22 Nov 25 00:56 UTC │
	│ start   │ -p no-preload-165130 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-165130      │ jenkins │ v1.37.0 │ 22 Nov 25 00:56 UTC │ 22 Nov 25 00:57 UTC │
	│ addons  │ enable metrics-server -p embed-certs-879000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-879000     │ jenkins │ v1.37.0 │ 22 Nov 25 00:56 UTC │                     │
	│ stop    │ -p embed-certs-879000 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-879000     │ jenkins │ v1.37.0 │ 22 Nov 25 00:56 UTC │ 22 Nov 25 00:57 UTC │
	│ addons  │ enable dashboard -p embed-certs-879000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-879000     │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │ 22 Nov 25 00:57 UTC │
	│ start   │ -p embed-certs-879000 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-879000     │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │                     │
	│ image   │ no-preload-165130 image list --format=json                                                                                                                                                                                                    │ no-preload-165130      │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │ 22 Nov 25 00:57 UTC │
	│ pause   │ -p no-preload-165130 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-165130      │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 00:57:03
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1122 00:57:03.399087  710840 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:57:03.399199  710840 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:57:03.399209  710840 out.go:374] Setting ErrFile to fd 2...
	I1122 00:57:03.399215  710840 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:57:03.399478  710840 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1122 00:57:03.399844  710840 out.go:368] Setting JSON to false
	I1122 00:57:03.400771  710840 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":20340,"bootTime":1763752684,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1122 00:57:03.400844  710840 start.go:143] virtualization:  
	I1122 00:57:03.403697  710840 out.go:179] * [embed-certs-879000] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1122 00:57:03.407438  710840 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:57:03.407577  710840 notify.go:221] Checking for updates...
	I1122 00:57:03.413397  710840 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:57:03.416288  710840 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1122 00:57:03.419167  710840 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-513600/.minikube
	I1122 00:57:03.422137  710840 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1122 00:57:03.425025  710840 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:57:03.428470  710840 config.go:182] Loaded profile config "embed-certs-879000": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:57:03.429053  710840 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:57:03.459740  710840 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1122 00:57:03.459857  710840 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:57:03.518225  710840 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-22 00:57:03.50841844 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:57:03.518349  710840 docker.go:319] overlay module found
	I1122 00:57:03.521573  710840 out.go:179] * Using the docker driver based on existing profile
	I1122 00:57:03.524292  710840 start.go:309] selected driver: docker
	I1122 00:57:03.524312  710840 start.go:930] validating driver "docker" against &{Name:embed-certs-879000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-879000 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:57:03.524403  710840 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:57:03.525118  710840 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:57:03.582349  710840 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-22 00:57:03.573362057 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:57:03.582684  710840 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:57:03.582719  710840 cni.go:84] Creating CNI manager for ""
	I1122 00:57:03.582780  710840 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:57:03.582831  710840 start.go:353] cluster config:
	{Name:embed-certs-879000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-879000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:57:03.587979  710840 out.go:179] * Starting "embed-certs-879000" primary control-plane node in "embed-certs-879000" cluster
	I1122 00:57:03.590773  710840 cache.go:134] Beginning downloading kic base image for docker with crio
	I1122 00:57:03.593670  710840 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:57:03.596459  710840 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:57:03.596484  710840 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:57:03.596518  710840 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1122 00:57:03.596539  710840 cache.go:65] Caching tarball of preloaded images
	I1122 00:57:03.596620  710840 preload.go:238] Found /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1122 00:57:03.596630  710840 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1122 00:57:03.596743  710840 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/embed-certs-879000/config.json ...
	I1122 00:57:03.616904  710840 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:57:03.616927  710840 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:57:03.616948  710840 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:57:03.616971  710840 start.go:360] acquireMachinesLock for embed-certs-879000: {Name:mk05ac8d8898660ab51c5645d9a1c115c537bdda Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:57:03.617034  710840 start.go:364] duration metric: took 41.049µs to acquireMachinesLock for "embed-certs-879000"
	I1122 00:57:03.617059  710840 start.go:96] Skipping create...Using existing machine configuration
	I1122 00:57:03.617065  710840 fix.go:54] fixHost starting: 
	I1122 00:57:03.617334  710840 cli_runner.go:164] Run: docker container inspect embed-certs-879000 --format={{.State.Status}}
	I1122 00:57:03.637186  710840 fix.go:112] recreateIfNeeded on embed-certs-879000: state=Stopped err=<nil>
	W1122 00:57:03.637216  710840 fix.go:138] unexpected machine state, will restart: <nil>
	W1122 00:57:01.127793  707914 pod_ready.go:104] pod "coredns-66bc5c9577-pt27w" is not "Ready", error: <nil>
	W1122 00:57:03.128662  707914 pod_ready.go:104] pod "coredns-66bc5c9577-pt27w" is not "Ready", error: <nil>
	W1122 00:57:05.628253  707914 pod_ready.go:104] pod "coredns-66bc5c9577-pt27w" is not "Ready", error: <nil>
	I1122 00:57:03.640356  710840 out.go:252] * Restarting existing docker container for "embed-certs-879000" ...
	I1122 00:57:03.640442  710840 cli_runner.go:164] Run: docker start embed-certs-879000
	I1122 00:57:03.913531  710840 cli_runner.go:164] Run: docker container inspect embed-certs-879000 --format={{.State.Status}}
	I1122 00:57:03.933482  710840 kic.go:430] container "embed-certs-879000" state is running.
	I1122 00:57:03.933909  710840 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-879000
	I1122 00:57:03.958177  710840 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/embed-certs-879000/config.json ...
	I1122 00:57:03.958537  710840 machine.go:94] provisionDockerMachine start ...
	I1122 00:57:03.958655  710840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879000
	I1122 00:57:03.980856  710840 main.go:143] libmachine: Using SSH client type: native
	I1122 00:57:03.981176  710840 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33797 <nil> <nil>}
	I1122 00:57:03.981186  710840 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:57:03.982159  710840 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1122 00:57:07.127422  710840 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-879000
	
	I1122 00:57:07.127445  710840 ubuntu.go:182] provisioning hostname "embed-certs-879000"
	I1122 00:57:07.127505  710840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879000
	I1122 00:57:07.144942  710840 main.go:143] libmachine: Using SSH client type: native
	I1122 00:57:07.145328  710840 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33797 <nil> <nil>}
	I1122 00:57:07.145348  710840 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-879000 && echo "embed-certs-879000" | sudo tee /etc/hostname
	I1122 00:57:07.299001  710840 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-879000
	
	I1122 00:57:07.299075  710840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879000
	I1122 00:57:07.317619  710840 main.go:143] libmachine: Using SSH client type: native
	I1122 00:57:07.317958  710840 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33797 <nil> <nil>}
	I1122 00:57:07.317981  710840 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-879000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-879000/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-879000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:57:07.457911  710840 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1122 00:57:07.457978  710840 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-513600/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-513600/.minikube}
	I1122 00:57:07.458014  710840 ubuntu.go:190] setting up certificates
	I1122 00:57:07.458056  710840 provision.go:84] configureAuth start
	I1122 00:57:07.458136  710840 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-879000
	I1122 00:57:07.476012  710840 provision.go:143] copyHostCerts
	I1122 00:57:07.476076  710840 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem, removing ...
	I1122 00:57:07.476096  710840 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem
	I1122 00:57:07.476250  710840 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem (1078 bytes)
	I1122 00:57:07.476377  710840 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem, removing ...
	I1122 00:57:07.476384  710840 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem
	I1122 00:57:07.476414  710840 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem (1123 bytes)
	I1122 00:57:07.476469  710840 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem, removing ...
	I1122 00:57:07.476474  710840 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem
	I1122 00:57:07.476496  710840 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem (1675 bytes)
	I1122 00:57:07.476541  710840 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem org=jenkins.embed-certs-879000 san=[127.0.0.1 192.168.76.2 embed-certs-879000 localhost minikube]
	I1122 00:57:07.704698  710840 provision.go:177] copyRemoteCerts
	I1122 00:57:07.704792  710840 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:57:07.704848  710840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879000
	I1122 00:57:07.722169  710840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33797 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/embed-certs-879000/id_rsa Username:docker}
	I1122 00:57:07.829878  710840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:57:07.848675  710840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1122 00:57:07.868141  710840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1122 00:57:07.886348  710840 provision.go:87] duration metric: took 428.251263ms to configureAuth
	I1122 00:57:07.886380  710840 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:57:07.886577  710840 config.go:182] Loaded profile config "embed-certs-879000": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:57:07.886713  710840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879000
	I1122 00:57:07.907276  710840 main.go:143] libmachine: Using SSH client type: native
	I1122 00:57:07.907601  710840 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33797 <nil> <nil>}
	I1122 00:57:07.907615  710840 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1122 00:57:08.270080  710840 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1122 00:57:08.270103  710840 machine.go:97] duration metric: took 4.311552598s to provisionDockerMachine
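The provisioning step above writes CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 ' into /etc/sysconfig/crio.minikube over SSH and restarts CRI-O before moving on. A minimal sketch for checking that result by hand, assuming the embed-certs-879000 profile from this log is still up (this is not part of the test run itself):

	# inspect the drop-in file minikube created and confirm CRI-O came back after the restart
	minikube -p embed-certs-879000 ssh -- cat /etc/sysconfig/crio.minikube
	minikube -p embed-certs-879000 ssh -- sudo systemctl is-active crio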
	I1122 00:57:08.270114  710840 start.go:293] postStartSetup for "embed-certs-879000" (driver="docker")
	I1122 00:57:08.270129  710840 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:57:08.270186  710840 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:57:08.270226  710840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879000
	I1122 00:57:08.289133  710840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33797 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/embed-certs-879000/id_rsa Username:docker}
	I1122 00:57:08.389689  710840 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:57:08.393154  710840 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:57:08.393189  710840 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:57:08.393203  710840 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/addons for local assets ...
	I1122 00:57:08.393260  710840 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/files for local assets ...
	I1122 00:57:08.393353  710840 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem -> 5169372.pem in /etc/ssl/certs
	I1122 00:57:08.393458  710840 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:57:08.401180  710840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /etc/ssl/certs/5169372.pem (1708 bytes)
	I1122 00:57:08.419856  710840 start.go:296] duration metric: took 149.727089ms for postStartSetup
	I1122 00:57:08.419936  710840 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:57:08.419994  710840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879000
	I1122 00:57:08.438594  710840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33797 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/embed-certs-879000/id_rsa Username:docker}
	I1122 00:57:08.539288  710840 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:57:08.544246  710840 fix.go:56] duration metric: took 4.927174016s for fixHost
	I1122 00:57:08.544290  710840 start.go:83] releasing machines lock for "embed-certs-879000", held for 4.927226198s
	I1122 00:57:08.544362  710840 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-879000
	I1122 00:57:08.561215  710840 ssh_runner.go:195] Run: cat /version.json
	I1122 00:57:08.561271  710840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879000
	I1122 00:57:08.561576  710840 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:57:08.561632  710840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879000
	I1122 00:57:08.584530  710840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33797 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/embed-certs-879000/id_rsa Username:docker}
	I1122 00:57:08.585440  710840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33797 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/embed-certs-879000/id_rsa Username:docker}
	I1122 00:57:08.779711  710840 ssh_runner.go:195] Run: systemctl --version
	I1122 00:57:08.786262  710840 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1122 00:57:08.824635  710840 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:57:08.829301  710840 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:57:08.829425  710840 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:57:08.837579  710840 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1122 00:57:08.837601  710840 start.go:496] detecting cgroup driver to use...
	I1122 00:57:08.837631  710840 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1122 00:57:08.837684  710840 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1122 00:57:08.854957  710840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1122 00:57:08.868339  710840 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:57:08.868401  710840 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:57:08.883697  710840 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:57:08.897515  710840 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:57:09.033037  710840 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:57:09.163035  710840 docker.go:234] disabling docker service ...
	I1122 00:57:09.163095  710840 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:57:09.178870  710840 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:57:09.204580  710840 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:57:09.328676  710840 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:57:09.468076  710840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:57:09.482064  710840 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:57:09.498211  710840 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1122 00:57:09.498299  710840 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:57:09.507951  710840 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1122 00:57:09.508026  710840 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:57:09.517496  710840 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:57:09.526611  710840 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:57:09.536057  710840 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:57:09.546910  710840 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:57:09.555650  710840 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:57:09.564100  710840 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:57:09.573549  710840 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:57:09.581049  710840 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:57:09.588421  710840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:57:09.698893  710840 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1122 00:57:09.869188  710840 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1122 00:57:09.869321  710840 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1122 00:57:09.873228  710840 start.go:564] Will wait 60s for crictl version
	I1122 00:57:09.873335  710840 ssh_runner.go:195] Run: which crictl
	I1122 00:57:09.876798  710840 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:57:09.902164  710840 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1122 00:57:09.902300  710840 ssh_runner.go:195] Run: crio --version
	I1122 00:57:09.934519  710840 ssh_runner.go:195] Run: crio --version
	I1122 00:57:09.965060  710840 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
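The sed sequence above points CRI-O at registry.k8s.io/pause:3.10.1, switches it to the cgroupfs cgroup manager, and injects net.ipv4.ip_unprivileged_port_start=0 into default_sysctls before the service is restarted and crictl is probed. A small sketch, run inside the node (e.g. via minikube ssh), to confirm those edits landed; the grep pattern is only illustrative:

	sudo grep -E 'pause_image|cgroup_manager|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version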
	W1122 00:57:08.128579  707914 pod_ready.go:104] pod "coredns-66bc5c9577-pt27w" is not "Ready", error: <nil>
	W1122 00:57:10.628264  707914 pod_ready.go:104] pod "coredns-66bc5c9577-pt27w" is not "Ready", error: <nil>
	I1122 00:57:09.967932  710840 cli_runner.go:164] Run: docker network inspect embed-certs-879000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:57:09.984828  710840 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1122 00:57:09.989073  710840 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:57:10.012064  710840 kubeadm.go:884] updating cluster {Name:embed-certs-879000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-879000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1122 00:57:10.012220  710840 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:57:10.012289  710840 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:57:10.055609  710840 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:57:10.055636  710840 crio.go:433] Images already preloaded, skipping extraction
	I1122 00:57:10.055696  710840 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:57:10.083515  710840 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:57:10.083543  710840 cache_images.go:86] Images are preloaded, skipping loading
	I1122 00:57:10.083553  710840 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1122 00:57:10.083656  710840 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-879000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-879000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
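The kubelet unit above is a systemd override: the empty ExecStart= line clears any packaged command before the minikube-specific one (bootstrap kubeconfig, --cgroups-per-qos=false, node IP 192.168.76.2) is set, and it is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down. A small sketch, inside the node, to view the merged unit and its drop-ins (standard systemctl commands assumed available on the Debian base image):

	sudo systemctl cat kubelet
	sudo systemctl show kubelet -p DropInPaths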
	I1122 00:57:10.083772  710840 ssh_runner.go:195] Run: crio config
	I1122 00:57:10.154744  710840 cni.go:84] Creating CNI manager for ""
	I1122 00:57:10.154769  710840 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:57:10.154793  710840 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1122 00:57:10.154816  710840 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-879000 NodeName:embed-certs-879000 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 00:57:10.154955  710840 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-879000"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1122 00:57:10.155038  710840 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 00:57:10.163422  710840 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:57:10.163505  710840 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1122 00:57:10.171793  710840 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1122 00:57:10.184834  710840 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:57:10.203743  710840 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
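The kubeadm config rendered above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration, 2215 bytes) is staged as /var/tmp/minikube/kubeadm.yaml.new; because existing configuration files are found later in the log, minikube only diffs it against the current file instead of re-running init. A hedged sketch of how such a file could be exercised by hand with the cached binaries, assuming kubeadm is present under /var/lib/minikube/binaries/v1.34.1 (not something this test run does):

	# dry-run only: validates the staged config without touching the running cluster
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run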
	I1122 00:57:10.221464  710840 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1122 00:57:10.225438  710840 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:57:10.236155  710840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:57:10.383021  710840 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:57:10.400057  710840 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/embed-certs-879000 for IP: 192.168.76.2
	I1122 00:57:10.400080  710840 certs.go:195] generating shared ca certs ...
	I1122 00:57:10.400096  710840 certs.go:227] acquiring lock for ca certs: {Name:mkaf4c79493334cb2058b349c36be7473837f9b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:57:10.400237  710840 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key
	I1122 00:57:10.400309  710840 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key
	I1122 00:57:10.400321  710840 certs.go:257] generating profile certs ...
	I1122 00:57:10.400413  710840 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/embed-certs-879000/client.key
	I1122 00:57:10.400487  710840 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/embed-certs-879000/apiserver.key.f00c2ee1
	I1122 00:57:10.400542  710840 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/embed-certs-879000/proxy-client.key
	I1122 00:57:10.400654  710840 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem (1338 bytes)
	W1122 00:57:10.400688  710840 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937_empty.pem, impossibly tiny 0 bytes
	I1122 00:57:10.400701  710840 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:57:10.400733  710840 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:57:10.400760  710840 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:57:10.400790  710840 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem (1675 bytes)
	I1122 00:57:10.400846  710840 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem (1708 bytes)
	I1122 00:57:10.407254  710840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:57:10.433183  710840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1122 00:57:10.470216  710840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:57:10.491628  710840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:57:10.510784  710840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/embed-certs-879000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1122 00:57:10.531296  710840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/embed-certs-879000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1122 00:57:10.553866  710840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/embed-certs-879000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:57:10.575750  710840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/embed-certs-879000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1122 00:57:10.600564  710840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:57:10.628332  710840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem --> /usr/share/ca-certificates/516937.pem (1338 bytes)
	I1122 00:57:10.659418  710840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /usr/share/ca-certificates/5169372.pem (1708 bytes)
	I1122 00:57:10.682500  710840 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 00:57:10.708292  710840 ssh_runner.go:195] Run: openssl version
	I1122 00:57:10.714718  710840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516937.pem && ln -fs /usr/share/ca-certificates/516937.pem /etc/ssl/certs/516937.pem"
	I1122 00:57:10.723834  710840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516937.pem
	I1122 00:57:10.727780  710840 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:55 /usr/share/ca-certificates/516937.pem
	I1122 00:57:10.727845  710840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516937.pem
	I1122 00:57:10.773353  710840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516937.pem /etc/ssl/certs/51391683.0"
	I1122 00:57:10.783087  710840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5169372.pem && ln -fs /usr/share/ca-certificates/5169372.pem /etc/ssl/certs/5169372.pem"
	I1122 00:57:10.793870  710840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5169372.pem
	I1122 00:57:10.798689  710840 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:55 /usr/share/ca-certificates/5169372.pem
	I1122 00:57:10.798819  710840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5169372.pem
	I1122 00:57:10.846814  710840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5169372.pem /etc/ssl/certs/3ec20f2e.0"
	I1122 00:57:10.855486  710840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:57:10.863954  710840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:57:10.868193  710840 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:57:10.868261  710840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:57:10.910054  710840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
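Each CA copied into /usr/share/ca-certificates is also exposed through an OpenSSL hashed symlink (<subject-hash>.0) in /etc/ssl/certs, which is where the 51391683.0, 3ec20f2e.0 and b5213941.0 names above come from. A minimal sketch of recomputing one of those hashes by hand, inside the node:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	ls -l "/etc/ssl/certs/${h}.0"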
	I1122 00:57:10.918636  710840 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:57:10.923119  710840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1122 00:57:10.964597  710840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1122 00:57:11.006470  710840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1122 00:57:11.048433  710840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1122 00:57:11.090774  710840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1122 00:57:11.142676  710840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
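The openssl runs above use -checkend 86400, which exits non-zero if the certificate expires within the next 86400 seconds (24 hours), so this block verifies that none of the control-plane certificates is about to lapse. A small sketch showing the same check plus the raw expiry date for one certificate, run inside the node with a path taken from the log:

	openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt
	openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver.crt && echo "still valid for >24h"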
	I1122 00:57:11.211600  710840 kubeadm.go:401] StartCluster: {Name:embed-certs-879000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-879000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:57:11.211695  710840 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1122 00:57:11.211753  710840 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:57:11.325907  710840 cri.go:89] found id: "9852dc8e953e4082535d9da09fe8c7c488642b2923223eea6484d9282094e3ea"
	I1122 00:57:11.325954  710840 cri.go:89] found id: "f660ef303bd4694fbdad76a7eb87133a3cca27093a6685ac673521dce9c9d434"
	I1122 00:57:11.325973  710840 cri.go:89] found id: "d2057db699cba9b5fd5582afa88f5011c61af54cfbf9b6be282bae14ccb3e06b"
	I1122 00:57:11.325994  710840 cri.go:89] found id: "53b12e2f48badfbf5f25cd651f43e00f0d1451191aa045dab44e2461293c766c"
	I1122 00:57:11.326018  710840 cri.go:89] found id: ""
	I1122 00:57:11.326081  710840 ssh_runner.go:195] Run: sudo runc list -f json
	W1122 00:57:11.352405  710840 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:57:11Z" level=error msg="open /run/runc: no such file or directory"
	I1122 00:57:11.352536  710840 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 00:57:11.374273  710840 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1122 00:57:11.374338  710840 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1122 00:57:11.374405  710840 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1122 00:57:11.386479  710840 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1122 00:57:11.387126  710840 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-879000" does not appear in /home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1122 00:57:11.387437  710840 kubeconfig.go:62] /home/jenkins/minikube-integration/21934-513600/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-879000" cluster setting kubeconfig missing "embed-certs-879000" context setting]
	I1122 00:57:11.387959  710840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/kubeconfig: {Name:mkcd094d8a765e5a81369c0fb33cb5e696b17bb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:57:11.389723  710840 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1122 00:57:11.410217  710840 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1122 00:57:11.410262  710840 kubeadm.go:602] duration metric: took 35.904558ms to restartPrimaryControlPlane
	I1122 00:57:11.410273  710840 kubeadm.go:403] duration metric: took 198.683876ms to StartCluster
	I1122 00:57:11.410313  710840 settings.go:142] acquiring lock: {Name:mk6c31eb57ec65b047b78b4e1046e03fe7cc77bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:57:11.410395  710840 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1122 00:57:11.411792  710840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/kubeconfig: {Name:mkcd094d8a765e5a81369c0fb33cb5e696b17bb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:57:11.412072  710840 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:57:11.412466  710840 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1122 00:57:11.412542  710840 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-879000"
	I1122 00:57:11.412556  710840 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-879000"
	W1122 00:57:11.412569  710840 addons.go:248] addon storage-provisioner should already be in state true
	I1122 00:57:11.412592  710840 host.go:66] Checking if "embed-certs-879000" exists ...
	I1122 00:57:11.413068  710840 cli_runner.go:164] Run: docker container inspect embed-certs-879000 --format={{.State.Status}}
	I1122 00:57:11.413391  710840 config.go:182] Loaded profile config "embed-certs-879000": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:57:11.413462  710840 addons.go:70] Setting dashboard=true in profile "embed-certs-879000"
	I1122 00:57:11.413481  710840 addons.go:239] Setting addon dashboard=true in "embed-certs-879000"
	W1122 00:57:11.413488  710840 addons.go:248] addon dashboard should already be in state true
	I1122 00:57:11.413525  710840 host.go:66] Checking if "embed-certs-879000" exists ...
	I1122 00:57:11.414199  710840 cli_runner.go:164] Run: docker container inspect embed-certs-879000 --format={{.State.Status}}
	I1122 00:57:11.414589  710840 addons.go:70] Setting default-storageclass=true in profile "embed-certs-879000"
	I1122 00:57:11.414613  710840 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-879000"
	I1122 00:57:11.414888  710840 cli_runner.go:164] Run: docker container inspect embed-certs-879000 --format={{.State.Status}}
	I1122 00:57:11.425503  710840 out.go:179] * Verifying Kubernetes components...
	I1122 00:57:11.433314  710840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:57:11.471294  710840 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1122 00:57:11.474372  710840 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:57:11.474392  710840 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1122 00:57:11.474453  710840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879000
	I1122 00:57:11.478849  710840 addons.go:239] Setting addon default-storageclass=true in "embed-certs-879000"
	W1122 00:57:11.478870  710840 addons.go:248] addon default-storageclass should already be in state true
	I1122 00:57:11.478893  710840 host.go:66] Checking if "embed-certs-879000" exists ...
	I1122 00:57:11.479457  710840 cli_runner.go:164] Run: docker container inspect embed-certs-879000 --format={{.State.Status}}
	I1122 00:57:11.479767  710840 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1122 00:57:11.482948  710840 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1122 00:57:11.148176  707914 pod_ready.go:94] pod "coredns-66bc5c9577-pt27w" is "Ready"
	I1122 00:57:11.148211  707914 pod_ready.go:86] duration metric: took 31.026348234s for pod "coredns-66bc5c9577-pt27w" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:57:11.159368  707914 pod_ready.go:83] waiting for pod "etcd-no-preload-165130" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:57:11.168009  707914 pod_ready.go:94] pod "etcd-no-preload-165130" is "Ready"
	I1122 00:57:11.168039  707914 pod_ready.go:86] duration metric: took 8.642373ms for pod "etcd-no-preload-165130" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:57:11.256649  707914 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-165130" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:57:11.264741  707914 pod_ready.go:94] pod "kube-apiserver-no-preload-165130" is "Ready"
	I1122 00:57:11.264763  707914 pod_ready.go:86] duration metric: took 8.088773ms for pod "kube-apiserver-no-preload-165130" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:57:11.267516  707914 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-165130" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:57:11.325617  707914 pod_ready.go:94] pod "kube-controller-manager-no-preload-165130" is "Ready"
	I1122 00:57:11.325640  707914 pod_ready.go:86] duration metric: took 58.104171ms for pod "kube-controller-manager-no-preload-165130" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:57:11.529199  707914 pod_ready.go:83] waiting for pod "kube-proxy-kr4ll" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:57:11.925260  707914 pod_ready.go:94] pod "kube-proxy-kr4ll" is "Ready"
	I1122 00:57:11.925290  707914 pod_ready.go:86] duration metric: took 396.058587ms for pod "kube-proxy-kr4ll" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:57:12.125736  707914 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-165130" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:57:12.525038  707914 pod_ready.go:94] pod "kube-scheduler-no-preload-165130" is "Ready"
	I1122 00:57:12.525069  707914 pod_ready.go:86] duration metric: took 399.302358ms for pod "kube-scheduler-no-preload-165130" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:57:12.525083  707914 pod_ready.go:40] duration metric: took 32.417440406s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:57:12.628049  707914 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1122 00:57:12.631398  707914 out.go:179] * Done! kubectl is now configured to use "no-preload-165130" cluster and "default" namespace by default
	I1122 00:57:11.487774  710840 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1122 00:57:11.487796  710840 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1122 00:57:11.487860  710840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879000
	I1122 00:57:11.545941  710840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33797 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/embed-certs-879000/id_rsa Username:docker}
	I1122 00:57:11.559356  710840 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1122 00:57:11.559389  710840 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1122 00:57:11.559450  710840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879000
	I1122 00:57:11.559798  710840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33797 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/embed-certs-879000/id_rsa Username:docker}
	I1122 00:57:11.589479  710840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33797 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/embed-certs-879000/id_rsa Username:docker}
	I1122 00:57:11.775851  710840 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:57:11.802252  710840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:57:11.803759  710840 node_ready.go:35] waiting up to 6m0s for node "embed-certs-879000" to be "Ready" ...
	I1122 00:57:11.854750  710840 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1122 00:57:11.854824  710840 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1122 00:57:11.879094  710840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1122 00:57:11.915188  710840 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1122 00:57:11.915273  710840 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1122 00:57:12.005164  710840 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1122 00:57:12.005252  710840 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1122 00:57:12.079473  710840 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1122 00:57:12.079550  710840 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1122 00:57:12.145702  710840 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1122 00:57:12.145782  710840 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1122 00:57:12.180120  710840 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1122 00:57:12.180196  710840 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1122 00:57:12.201560  710840 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1122 00:57:12.201632  710840 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1122 00:57:12.228109  710840 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1122 00:57:12.228183  710840 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1122 00:57:12.247327  710840 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1122 00:57:12.247397  710840 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1122 00:57:12.271810  710840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
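The apply above installs the dashboard addon's ten manifests with the cluster's own kubectl binary and the in-node kubeconfig at /var/lib/minikube/kubeconfig. A hedged sketch for watching the resulting deployment come up afterwards; the kubernetes-dashboard namespace and deployment name are the addon's usual defaults rather than something shown at this point in the log:

	kubectl --context embed-certs-879000 -n kubernetes-dashboard get deploy
	kubectl --context embed-certs-879000 -n kubernetes-dashboard rollout status deploy/kubernetes-dashboard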
	I1122 00:57:16.180749  710840 node_ready.go:49] node "embed-certs-879000" is "Ready"
	I1122 00:57:16.180776  710840 node_ready.go:38] duration metric: took 4.3769276s for node "embed-certs-879000" to be "Ready" ...
	I1122 00:57:16.180791  710840 api_server.go:52] waiting for apiserver process to appear ...
	I1122 00:57:16.180864  710840 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:57:18.165598  710840 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.363309216s)
	I1122 00:57:18.165707  710840 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.286542585s)
	I1122 00:57:18.222681  710840 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.95079118s)
	I1122 00:57:18.222863  710840 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.041986955s)
	I1122 00:57:18.222906  710840 api_server.go:72] duration metric: took 6.810796314s to wait for apiserver process to appear ...
	I1122 00:57:18.222928  710840 api_server.go:88] waiting for apiserver healthz status ...
	I1122 00:57:18.222959  710840 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:57:18.226719  710840 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-879000 addons enable metrics-server
	
	I1122 00:57:18.230708  710840 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1122 00:57:18.233368  710840 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1122 00:57:18.233426  710840 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1122 00:57:18.233714  710840 addons.go:530] duration metric: took 6.821248914s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1122 00:57:18.723959  710840 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:57:18.731985  710840 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1122 00:57:18.733024  710840 api_server.go:141] control plane version: v1.34.1
	I1122 00:57:18.733050  710840 api_server.go:131] duration metric: took 510.103259ms to wait for apiserver health ...
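The first /healthz probe above returns 500 only because the rbac/bootstrap-roles post-start hook has not finished after the restart (it is the single [-] entry in the verbose output); roughly half a second later the same endpoint returns 200 and the wait completes. A minimal sketch for querying the same verbose health output through the API server, assuming the kubeconfig context created for this profile:

	kubectl --context embed-certs-879000 get --raw '/healthz?verbose' | tail -n 3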
	I1122 00:57:18.733059  710840 system_pods.go:43] waiting for kube-system pods to appear ...
	I1122 00:57:18.743382  710840 system_pods.go:59] 8 kube-system pods found
	I1122 00:57:18.743464  710840 system_pods.go:61] "coredns-66bc5c9577-h2kpd" [5adad534-0ba4-479f-8e5a-7f5a9e26fb1e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:57:18.743488  710840 system_pods.go:61] "etcd-embed-certs-879000" [7cebfe87-7413-4cfe-8899-73cdba19a310] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1122 00:57:18.743508  710840 system_pods.go:61] "kindnet-j8wwg" [29cadb16-a427-4f8b-b121-3af35927f8d5] Running
	I1122 00:57:18.743542  710840 system_pods.go:61] "kube-apiserver-embed-certs-879000" [66ed66fb-cf57-49c6-a5fc-a8814e40c10b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1122 00:57:18.743567  710840 system_pods.go:61] "kube-controller-manager-embed-certs-879000" [347609cd-705b-441f-941f-936a0e0574f7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1122 00:57:18.743588  710840 system_pods.go:61] "kube-proxy-w9bqj" [f56c390b-4d40-40a3-9862-f5081a6561e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1122 00:57:18.743609  710840 system_pods.go:61] "kube-scheduler-embed-certs-879000" [364d55f5-1b98-4087-999c-c7302863e10f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1122 00:57:18.743629  710840 system_pods.go:61] "storage-provisioner" [042e1631-1c8e-4ce0-92e5-cdd4742fa06b] Running
	I1122 00:57:18.743662  710840 system_pods.go:74] duration metric: took 10.596728ms to wait for pod list to return data ...
	I1122 00:57:18.743683  710840 default_sa.go:34] waiting for default service account to be created ...
	I1122 00:57:18.746874  710840 default_sa.go:45] found service account: "default"
	I1122 00:57:18.746900  710840 default_sa.go:55] duration metric: took 3.199947ms for default service account to be created ...
	I1122 00:57:18.746910  710840 system_pods.go:116] waiting for k8s-apps to be running ...
	I1122 00:57:18.749794  710840 system_pods.go:86] 8 kube-system pods found
	I1122 00:57:18.749865  710840 system_pods.go:89] "coredns-66bc5c9577-h2kpd" [5adad534-0ba4-479f-8e5a-7f5a9e26fb1e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:57:18.749875  710840 system_pods.go:89] "etcd-embed-certs-879000" [7cebfe87-7413-4cfe-8899-73cdba19a310] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1122 00:57:18.749881  710840 system_pods.go:89] "kindnet-j8wwg" [29cadb16-a427-4f8b-b121-3af35927f8d5] Running
	I1122 00:57:18.749888  710840 system_pods.go:89] "kube-apiserver-embed-certs-879000" [66ed66fb-cf57-49c6-a5fc-a8814e40c10b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1122 00:57:18.749895  710840 system_pods.go:89] "kube-controller-manager-embed-certs-879000" [347609cd-705b-441f-941f-936a0e0574f7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1122 00:57:18.749905  710840 system_pods.go:89] "kube-proxy-w9bqj" [f56c390b-4d40-40a3-9862-f5081a6561e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1122 00:57:18.749911  710840 system_pods.go:89] "kube-scheduler-embed-certs-879000" [364d55f5-1b98-4087-999c-c7302863e10f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1122 00:57:18.749919  710840 system_pods.go:89] "storage-provisioner" [042e1631-1c8e-4ce0-92e5-cdd4742fa06b] Running
	I1122 00:57:18.749927  710840 system_pods.go:126] duration metric: took 3.010783ms to wait for k8s-apps to be running ...
	I1122 00:57:18.749940  710840 system_svc.go:44] waiting for kubelet service to be running ....
	I1122 00:57:18.749997  710840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:57:18.763126  710840 system_svc.go:56] duration metric: took 13.165736ms WaitForService to wait for kubelet
	I1122 00:57:18.763192  710840 kubeadm.go:587] duration metric: took 7.3510799s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:57:18.763234  710840 node_conditions.go:102] verifying NodePressure condition ...
	I1122 00:57:18.766578  710840 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1122 00:57:18.766650  710840 node_conditions.go:123] node cpu capacity is 2
	I1122 00:57:18.766676  710840 node_conditions.go:105] duration metric: took 3.410994ms to run NodePressure ...
	I1122 00:57:18.766701  710840 start.go:242] waiting for startup goroutines ...
	I1122 00:57:18.766736  710840 start.go:247] waiting for cluster config update ...
	I1122 00:57:18.766766  710840 start.go:256] writing updated cluster config ...
	I1122 00:57:18.767064  710840 ssh_runner.go:195] Run: rm -f paused
	I1122 00:57:18.771051  710840 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:57:18.775605  710840 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-h2kpd" in "kube-system" namespace to be "Ready" or be gone ...
	W1122 00:57:20.781042  710840 pod_ready.go:104] pod "coredns-66bc5c9577-h2kpd" is not "Ready", error: <nil>
	W1122 00:57:22.782264  710840 pod_ready.go:104] pod "coredns-66bc5c9577-h2kpd" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Nov 22 00:57:09 no-preload-165130 crio[654]: time="2025-11-22T00:57:09.207013506Z" level=info msg="Removed container bb223c78c451293e80b95ccadbece2f1a33e69511725ec01523fe5179e972fc5: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zvt5d/dashboard-metrics-scraper" id=ae6db141-dc1e-4172-b48a-c487b0c920d6 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 22 00:57:09 no-preload-165130 conmon[1151]: conmon 68cc2c66134d4931da62 <ninfo>: container 1159 exited with status 1
	Nov 22 00:57:10 no-preload-165130 crio[654]: time="2025-11-22T00:57:10.195841569Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=5ddd92d7-80ad-400f-afc9-9a9e5af8ff2a name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:57:10 no-preload-165130 crio[654]: time="2025-11-22T00:57:10.199303253Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=7668790b-b40f-4332-b1ba-4d58f00d68fd name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:57:10 no-preload-165130 crio[654]: time="2025-11-22T00:57:10.200414047Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=b4f391e9-f06e-4af5-80ea-a02c8171c8da name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:57:10 no-preload-165130 crio[654]: time="2025-11-22T00:57:10.200524517Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:57:10 no-preload-165130 crio[654]: time="2025-11-22T00:57:10.213839533Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:57:10 no-preload-165130 crio[654]: time="2025-11-22T00:57:10.215800862Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/844ef4f756dabfb7c04eb582c6846d8219999d6fe7f1bd4f6a2a91ef33e3eea6/merged/etc/passwd: no such file or directory"
	Nov 22 00:57:10 no-preload-165130 crio[654]: time="2025-11-22T00:57:10.215837333Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/844ef4f756dabfb7c04eb582c6846d8219999d6fe7f1bd4f6a2a91ef33e3eea6/merged/etc/group: no such file or directory"
	Nov 22 00:57:10 no-preload-165130 crio[654]: time="2025-11-22T00:57:10.216097871Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:57:10 no-preload-165130 crio[654]: time="2025-11-22T00:57:10.257193678Z" level=info msg="Created container 8e14a16d0cdf92563d2c084ac281b0188eb1b47be445498e8be3c53d189c3b19: kube-system/storage-provisioner/storage-provisioner" id=b4f391e9-f06e-4af5-80ea-a02c8171c8da name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:57:10 no-preload-165130 crio[654]: time="2025-11-22T00:57:10.258367936Z" level=info msg="Starting container: 8e14a16d0cdf92563d2c084ac281b0188eb1b47be445498e8be3c53d189c3b19" id=2631d0bb-0542-4542-a59e-60d0627a6c69 name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:57:10 no-preload-165130 crio[654]: time="2025-11-22T00:57:10.260302723Z" level=info msg="Started container" PID=1644 containerID=8e14a16d0cdf92563d2c084ac281b0188eb1b47be445498e8be3c53d189c3b19 description=kube-system/storage-provisioner/storage-provisioner id=2631d0bb-0542-4542-a59e-60d0627a6c69 name=/runtime.v1.RuntimeService/StartContainer sandboxID=de917186bd543dff6663546b1fe8f75c626d3b36755cb5cd520ac14c3040abdf
	Nov 22 00:57:19 no-preload-165130 crio[654]: time="2025-11-22T00:57:19.739550882Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:57:19 no-preload-165130 crio[654]: time="2025-11-22T00:57:19.745872658Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:57:19 no-preload-165130 crio[654]: time="2025-11-22T00:57:19.745912107Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 22 00:57:19 no-preload-165130 crio[654]: time="2025-11-22T00:57:19.745933777Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:57:19 no-preload-165130 crio[654]: time="2025-11-22T00:57:19.748909525Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:57:19 no-preload-165130 crio[654]: time="2025-11-22T00:57:19.748943157Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 22 00:57:19 no-preload-165130 crio[654]: time="2025-11-22T00:57:19.748963473Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:57:19 no-preload-165130 crio[654]: time="2025-11-22T00:57:19.751781729Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:57:19 no-preload-165130 crio[654]: time="2025-11-22T00:57:19.751812357Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 22 00:57:19 no-preload-165130 crio[654]: time="2025-11-22T00:57:19.751835971Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:57:19 no-preload-165130 crio[654]: time="2025-11-22T00:57:19.755182835Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:57:19 no-preload-165130 crio[654]: time="2025-11-22T00:57:19.755214695Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	8e14a16d0cdf9       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           18 seconds ago      Running             storage-provisioner         2                   de917186bd543       storage-provisioner                          kube-system
	61a964437806a       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           19 seconds ago      Exited              dashboard-metrics-scraper   2                   30f6804605129       dashboard-metrics-scraper-6ffb444bf9-zvt5d   kubernetes-dashboard
	f3789d40f639b       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   40 seconds ago      Running             kubernetes-dashboard        0                   dad8eb59e956e       kubernetes-dashboard-855c9754f9-xsqns        kubernetes-dashboard
	000dd70b69821       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           48 seconds ago      Running             coredns                     1                   62f021300a0ba       coredns-66bc5c9577-pt27w                     kube-system
	a49954f61edb7       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           48 seconds ago      Running             busybox                     1                   d7eb53eae797e       busybox                                      default
	1b8b4ae3716d0       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           48 seconds ago      Running             kindnet-cni                 1                   cc9ad59dade89       kindnet-2kqbq                                kube-system
	68cc2c66134d4       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           48 seconds ago      Exited              storage-provisioner         1                   de917186bd543       storage-provisioner                          kube-system
	7f4fb030dec09       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           48 seconds ago      Running             kube-proxy                  1                   e194a404d72ac       kube-proxy-kr4ll                             kube-system
	d445939f66bc7       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           54 seconds ago      Running             kube-scheduler              1                   b36f8cf63dd12       kube-scheduler-no-preload-165130             kube-system
	1842c88afa2f9       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           54 seconds ago      Running             etcd                        1                   fc4c7d2dd1cf7       etcd-no-preload-165130                       kube-system
	4a703cddbc0fe       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           54 seconds ago      Running             kube-apiserver              1                   448f5420faef6       kube-apiserver-no-preload-165130             kube-system
	447fcc475a732       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           54 seconds ago      Running             kube-controller-manager     1                   af171786ca9dd       kube-controller-manager-no-preload-165130    kube-system
	
	
	==> coredns [000dd70b698217aa6d95bd509cf47f3362cb467c72c920839b74ece78d579568] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33127 - 21090 "HINFO IN 1763815445658371244.547625588154392815. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.025430509s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-165130
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-165130
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=no-preload-165130
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T00_55_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:55:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-165130
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:57:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:57:09 +0000   Sat, 22 Nov 2025 00:55:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:57:09 +0000   Sat, 22 Nov 2025 00:55:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:57:09 +0000   Sat, 22 Nov 2025 00:55:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 22 Nov 2025 00:57:09 +0000   Sat, 22 Nov 2025 00:55:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-165130
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c92eea4d3c03c0156e01fcf8691e3907
	  System UUID:                194834cc-9098-4e11-a16d-906d0fa2db99
	  Boot ID:                    72ac7385-472f-47d1-a23e-bd80468d6e09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 coredns-66bc5c9577-pt27w                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     105s
	  kube-system                 etcd-no-preload-165130                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         111s
	  kube-system                 kindnet-2kqbq                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-no-preload-165130              250m (12%)    0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-controller-manager-no-preload-165130     200m (10%)    0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-kr4ll                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-no-preload-165130              100m (5%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-zvt5d    0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-xsqns         0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 103s                 kube-proxy       
	  Normal   Starting                 48s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  2m2s (x8 over 2m2s)  kubelet          Node no-preload-165130 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m2s (x8 over 2m2s)  kubelet          Node no-preload-165130 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m2s (x8 over 2m2s)  kubelet          Node no-preload-165130 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    111s                 kubelet          Node no-preload-165130 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 111s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  111s                 kubelet          Node no-preload-165130 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     111s                 kubelet          Node no-preload-165130 status is now: NodeHasSufficientPID
	  Normal   Starting                 111s                 kubelet          Starting kubelet.
	  Normal   RegisteredNode           106s                 node-controller  Node no-preload-165130 event: Registered Node no-preload-165130 in Controller
	  Normal   NodeReady                89s                  kubelet          Node no-preload-165130 status is now: NodeReady
	  Normal   Starting                 56s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 56s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  55s (x8 over 56s)    kubelet          Node no-preload-165130 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    55s (x8 over 56s)    kubelet          Node no-preload-165130 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     55s (x8 over 56s)    kubelet          Node no-preload-165130 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           47s                  node-controller  Node no-preload-165130 event: Registered Node no-preload-165130 in Controller
	
	
	==> dmesg <==
	[Nov22 00:33] overlayfs: idmapped layers are currently not supported
	[Nov22 00:35] overlayfs: idmapped layers are currently not supported
	[Nov22 00:36] overlayfs: idmapped layers are currently not supported
	[ +18.168104] overlayfs: idmapped layers are currently not supported
	[Nov22 00:37] overlayfs: idmapped layers are currently not supported
	[ +56.322609] overlayfs: idmapped layers are currently not supported
	[Nov22 00:38] overlayfs: idmapped layers are currently not supported
	[Nov22 00:39] overlayfs: idmapped layers are currently not supported
	[ +23.174928] overlayfs: idmapped layers are currently not supported
	[Nov22 00:41] overlayfs: idmapped layers are currently not supported
	[Nov22 00:42] overlayfs: idmapped layers are currently not supported
	[Nov22 00:44] overlayfs: idmapped layers are currently not supported
	[Nov22 00:45] overlayfs: idmapped layers are currently not supported
	[Nov22 00:46] overlayfs: idmapped layers are currently not supported
	[Nov22 00:48] overlayfs: idmapped layers are currently not supported
	[Nov22 00:50] overlayfs: idmapped layers are currently not supported
	[Nov22 00:51] overlayfs: idmapped layers are currently not supported
	[ +11.900293] overlayfs: idmapped layers are currently not supported
	[ +28.922055] overlayfs: idmapped layers are currently not supported
	[Nov22 00:52] overlayfs: idmapped layers are currently not supported
	[Nov22 00:53] overlayfs: idmapped layers are currently not supported
	[Nov22 00:54] overlayfs: idmapped layers are currently not supported
	[Nov22 00:55] overlayfs: idmapped layers are currently not supported
	[Nov22 00:56] overlayfs: idmapped layers are currently not supported
	[Nov22 00:57] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [1842c88afa2f92b8f5db1d2172ed30aa44f94bd9a078bc60055ef3ff3665300f] <==
	{"level":"warn","ts":"2025-11-22T00:56:36.848374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:56:36.864652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:56:36.886097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:56:36.912368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:56:36.927317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:56:36.944974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:56:36.963100Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:56:36.974385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:56:37.003121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:56:37.038872Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:56:37.063854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:56:37.079512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:56:37.095869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:56:37.107150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:56:37.130037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:56:37.182385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:56:37.209981Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:56:37.233016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:56:37.255126Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:56:37.265775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:56:37.314423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:56:37.354433Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:56:37.374977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:56:37.419109Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:56:37.450895Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33700","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 00:57:28 up  5:39,  0 user,  load average: 4.05, 3.97, 2.93
	Linux no-preload-165130 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1b8b4ae3716d0f09f3a149118c900081488b110516bacbfdff2a7628edfb0a3c] <==
	I1122 00:56:39.558156       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1122 00:56:39.565680       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1122 00:56:39.566334       1 main.go:148] setting mtu 1500 for CNI 
	I1122 00:56:39.566381       1 main.go:178] kindnetd IP family: "ipv4"
	I1122 00:56:39.566399       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-22T00:56:39Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1122 00:56:39.737099       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1122 00:56:39.737173       1 controller.go:381] "Waiting for informer caches to sync"
	I1122 00:56:39.737205       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1122 00:56:39.738080       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1122 00:57:09.737468       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1122 00:57:09.737603       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1122 00:57:09.738882       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1122 00:57:09.738995       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1122 00:57:11.037906       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1122 00:57:11.037943       1 metrics.go:72] Registering metrics
	I1122 00:57:11.037996       1 controller.go:711] "Syncing nftables rules"
	I1122 00:57:19.739010       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:57:19.739125       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4a703cddbc0fe37c59864ce18d11f40681bb2c9564af9cff7e041d5680b0df58] <==
	I1122 00:56:38.602657       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1122 00:56:38.602704       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1122 00:56:38.618920       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:56:38.618998       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1122 00:56:38.619015       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1122 00:56:38.619026       1 policy_source.go:240] refreshing policies
	I1122 00:56:38.627280       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1122 00:56:38.651052       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1122 00:56:38.664554       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1122 00:56:38.681902       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1122 00:56:38.687209       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1122 00:56:38.687299       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1122 00:56:38.687307       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1122 00:56:38.687649       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1122 00:56:38.988109       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1122 00:56:39.146307       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1122 00:56:39.468663       1 controller.go:667] quota admission added evaluator for: namespaces
	I1122 00:56:39.568932       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1122 00:56:39.655335       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1122 00:56:39.680990       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1122 00:56:39.806523       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.165.203"}
	I1122 00:56:39.835393       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.173.182"}
	I1122 00:56:42.144851       1 controller.go:667] quota admission added evaluator for: endpoints
	I1122 00:56:42.185645       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1122 00:56:42.393898       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [447fcc475a7323f248b3a0cdb76a205b4bbe6be27f083fa2031b33a0533a533e] <==
	I1122 00:56:41.787371       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-165130"
	I1122 00:56:41.787423       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1122 00:56:41.788434       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1122 00:56:41.788730       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1122 00:56:41.789079       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1122 00:56:41.790959       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1122 00:56:41.796414       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1122 00:56:41.796502       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1122 00:56:41.796530       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1122 00:56:41.798469       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1122 00:56:41.798659       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1122 00:56:41.801065       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1122 00:56:41.811680       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1122 00:56:41.813358       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1122 00:56:41.814531       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:56:41.819274       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1122 00:56:41.820174       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1122 00:56:41.821333       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:56:41.821349       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1122 00:56:41.821357       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1122 00:56:41.831351       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1122 00:56:41.831407       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1122 00:56:41.831429       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1122 00:56:41.831434       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1122 00:56:41.831439       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	
	
	==> kube-proxy [7f4fb030dec090c0c97a17566010db1ca14f9d615fa6361747c4bee8a0793d79] <==
	I1122 00:56:39.799443       1 server_linux.go:53] "Using iptables proxy"
	I1122 00:56:40.034449       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1122 00:56:40.150155       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1122 00:56:40.150285       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1122 00:56:40.150401       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1122 00:56:40.190136       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1122 00:56:40.190206       1 server_linux.go:132] "Using iptables Proxier"
	I1122 00:56:40.195433       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1122 00:56:40.195812       1 server.go:527] "Version info" version="v1.34.1"
	I1122 00:56:40.195836       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:56:40.199582       1 config.go:200] "Starting service config controller"
	I1122 00:56:40.199606       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1122 00:56:40.199855       1 config.go:106] "Starting endpoint slice config controller"
	I1122 00:56:40.199871       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1122 00:56:40.200027       1 config.go:403] "Starting serviceCIDR config controller"
	I1122 00:56:40.200047       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1122 00:56:40.203147       1 config.go:309] "Starting node config controller"
	I1122 00:56:40.203171       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1122 00:56:40.203180       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1122 00:56:40.300697       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1122 00:56:40.300701       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1122 00:56:40.300718       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [d445939f66bc78fcb625769cedbe045c3808629a73405f7634fc8403e2147225] <==
	I1122 00:56:36.847814       1 serving.go:386] Generated self-signed cert in-memory
	W1122 00:56:38.348529       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1122 00:56:38.348556       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1122 00:56:38.348566       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1122 00:56:38.348574       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1122 00:56:38.583633       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1122 00:56:38.583659       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:56:38.609097       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:56:38.609138       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:56:38.627054       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1122 00:56:38.627553       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1122 00:56:38.709908       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 22 00:56:39 no-preload-165130 kubelet[776]: W1122 00:56:39.417743     776 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/1c65dce5fc4b31daf0735f57309cfc4584efb68447875212bc03d5cc1861ca03/crio-d7eb53eae797e9eee32adc178c6a77341dba0bb7ab302a4cc127af543491f907 WatchSource:0}: Error finding container d7eb53eae797e9eee32adc178c6a77341dba0bb7ab302a4cc127af543491f907: Status 404 returned error can't find the container with id d7eb53eae797e9eee32adc178c6a77341dba0bb7ab302a4cc127af543491f907
	Nov 22 00:56:42 no-preload-165130 kubelet[776]: I1122 00:56:42.399872     776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/fc63b412-889b-418a-a30a-c1de29e57030-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-xsqns\" (UID: \"fc63b412-889b-418a-a30a-c1de29e57030\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xsqns"
	Nov 22 00:56:42 no-preload-165130 kubelet[776]: I1122 00:56:42.399929     776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch4sg\" (UniqueName: \"kubernetes.io/projected/fc63b412-889b-418a-a30a-c1de29e57030-kube-api-access-ch4sg\") pod \"kubernetes-dashboard-855c9754f9-xsqns\" (UID: \"fc63b412-889b-418a-a30a-c1de29e57030\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xsqns"
	Nov 22 00:56:42 no-preload-165130 kubelet[776]: I1122 00:56:42.500481     776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/59b54b7f-dca2-49aa-978b-e4e4de474d1a-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-zvt5d\" (UID: \"59b54b7f-dca2-49aa-978b-e4e4de474d1a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zvt5d"
	Nov 22 00:56:42 no-preload-165130 kubelet[776]: I1122 00:56:42.500542     776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rg4c\" (UniqueName: \"kubernetes.io/projected/59b54b7f-dca2-49aa-978b-e4e4de474d1a-kube-api-access-7rg4c\") pod \"dashboard-metrics-scraper-6ffb444bf9-zvt5d\" (UID: \"59b54b7f-dca2-49aa-978b-e4e4de474d1a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zvt5d"
	Nov 22 00:56:42 no-preload-165130 kubelet[776]: W1122 00:56:42.701478     776 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/1c65dce5fc4b31daf0735f57309cfc4584efb68447875212bc03d5cc1861ca03/crio-30f680460512915134f36b7b5310bf8d51951c44492e62e9dad1ef964684e8ae WatchSource:0}: Error finding container 30f680460512915134f36b7b5310bf8d51951c44492e62e9dad1ef964684e8ae: Status 404 returned error can't find the container with id 30f680460512915134f36b7b5310bf8d51951c44492e62e9dad1ef964684e8ae
	Nov 22 00:56:49 no-preload-165130 kubelet[776]: I1122 00:56:49.150685     776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xsqns" podStartSLOduration=1.727092721 podStartE2EDuration="7.150664865s" podCreationTimestamp="2025-11-22 00:56:42 +0000 UTC" firstStartedPulling="2025-11-22 00:56:42.683307197 +0000 UTC m=+9.906384531" lastFinishedPulling="2025-11-22 00:56:48.106879267 +0000 UTC m=+15.329956675" observedRunningTime="2025-11-22 00:56:49.150226101 +0000 UTC m=+16.373303427" watchObservedRunningTime="2025-11-22 00:56:49.150664865 +0000 UTC m=+16.373742191"
	Nov 22 00:56:54 no-preload-165130 kubelet[776]: I1122 00:56:54.146642     776 scope.go:117] "RemoveContainer" containerID="540420a0ee9ba05e522139f848415db83ff67a2fd96f1890f6827aafb9fb380c"
	Nov 22 00:56:55 no-preload-165130 kubelet[776]: I1122 00:56:55.150853     776 scope.go:117] "RemoveContainer" containerID="540420a0ee9ba05e522139f848415db83ff67a2fd96f1890f6827aafb9fb380c"
	Nov 22 00:56:55 no-preload-165130 kubelet[776]: I1122 00:56:55.151156     776 scope.go:117] "RemoveContainer" containerID="bb223c78c451293e80b95ccadbece2f1a33e69511725ec01523fe5179e972fc5"
	Nov 22 00:56:55 no-preload-165130 kubelet[776]: E1122 00:56:55.151325     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zvt5d_kubernetes-dashboard(59b54b7f-dca2-49aa-978b-e4e4de474d1a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zvt5d" podUID="59b54b7f-dca2-49aa-978b-e4e4de474d1a"
	Nov 22 00:56:56 no-preload-165130 kubelet[776]: I1122 00:56:56.154131     776 scope.go:117] "RemoveContainer" containerID="bb223c78c451293e80b95ccadbece2f1a33e69511725ec01523fe5179e972fc5"
	Nov 22 00:56:56 no-preload-165130 kubelet[776]: E1122 00:56:56.154289     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zvt5d_kubernetes-dashboard(59b54b7f-dca2-49aa-978b-e4e4de474d1a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zvt5d" podUID="59b54b7f-dca2-49aa-978b-e4e4de474d1a"
	Nov 22 00:56:57 no-preload-165130 kubelet[776]: I1122 00:56:57.156906     776 scope.go:117] "RemoveContainer" containerID="bb223c78c451293e80b95ccadbece2f1a33e69511725ec01523fe5179e972fc5"
	Nov 22 00:56:57 no-preload-165130 kubelet[776]: E1122 00:56:57.157089     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zvt5d_kubernetes-dashboard(59b54b7f-dca2-49aa-978b-e4e4de474d1a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zvt5d" podUID="59b54b7f-dca2-49aa-978b-e4e4de474d1a"
	Nov 22 00:57:08 no-preload-165130 kubelet[776]: I1122 00:57:08.981784     776 scope.go:117] "RemoveContainer" containerID="bb223c78c451293e80b95ccadbece2f1a33e69511725ec01523fe5179e972fc5"
	Nov 22 00:57:09 no-preload-165130 kubelet[776]: I1122 00:57:09.189423     776 scope.go:117] "RemoveContainer" containerID="bb223c78c451293e80b95ccadbece2f1a33e69511725ec01523fe5179e972fc5"
	Nov 22 00:57:09 no-preload-165130 kubelet[776]: I1122 00:57:09.190111     776 scope.go:117] "RemoveContainer" containerID="61a964437806ac8e9ff5933c7a44cae8231f1736a03e970fe7de22ff50d297f6"
	Nov 22 00:57:09 no-preload-165130 kubelet[776]: E1122 00:57:09.190835     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zvt5d_kubernetes-dashboard(59b54b7f-dca2-49aa-978b-e4e4de474d1a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zvt5d" podUID="59b54b7f-dca2-49aa-978b-e4e4de474d1a"
	Nov 22 00:57:10 no-preload-165130 kubelet[776]: I1122 00:57:10.195209     776 scope.go:117] "RemoveContainer" containerID="68cc2c66134d4931da62d73545c07672b9868b40d69e94529e89109da91c6ae0"
	Nov 22 00:57:15 no-preload-165130 kubelet[776]: I1122 00:57:15.624148     776 scope.go:117] "RemoveContainer" containerID="61a964437806ac8e9ff5933c7a44cae8231f1736a03e970fe7de22ff50d297f6"
	Nov 22 00:57:15 no-preload-165130 kubelet[776]: E1122 00:57:15.625341     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zvt5d_kubernetes-dashboard(59b54b7f-dca2-49aa-978b-e4e4de474d1a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zvt5d" podUID="59b54b7f-dca2-49aa-978b-e4e4de474d1a"
	Nov 22 00:57:25 no-preload-165130 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 22 00:57:25 no-preload-165130 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 22 00:57:25 no-preload-165130 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [f3789d40f639b65e128847fc1b9af95f63a6b0dd3621ef61be5f39b47cd48613] <==
	2025/11/22 00:56:48 Using namespace: kubernetes-dashboard
	2025/11/22 00:56:48 Using in-cluster config to connect to apiserver
	2025/11/22 00:56:48 Using secret token for csrf signing
	2025/11/22 00:56:48 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/22 00:56:48 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/22 00:56:48 Successful initial request to the apiserver, version: v1.34.1
	2025/11/22 00:56:48 Generating JWE encryption key
	2025/11/22 00:56:48 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/22 00:56:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/22 00:56:51 Initializing JWE encryption key from synchronized object
	2025/11/22 00:56:51 Creating in-cluster Sidecar client
	2025/11/22 00:56:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/22 00:56:51 Serving insecurely on HTTP port: 9090
	2025/11/22 00:57:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/22 00:56:48 Starting overwatch
	
	
	==> storage-provisioner [68cc2c66134d4931da62d73545c07672b9868b40d69e94529e89109da91c6ae0] <==
	I1122 00:56:39.714477       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1122 00:57:09.719573       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [8e14a16d0cdf92563d2c084ac281b0188eb1b47be445498e8be3c53d189c3b19] <==
	I1122 00:57:10.295430       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1122 00:57:10.320147       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1122 00:57:10.320269       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1122 00:57:10.322908       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:57:13.787364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:57:18.050051       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:57:21.649011       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:57:24.703098       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:57:27.724799       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:57:27.766152       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1122 00:57:27.766307       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1122 00:57:27.771236       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-165130_2bc0a3bc-8de2-490b-b7e4-2857944b60fb!
	I1122 00:57:27.770532       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fa43c339-2ef6-4277-ae88-e611a28aa232", APIVersion:"v1", ResourceVersion:"673", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-165130_2bc0a3bc-8de2-490b-b7e4-2857944b60fb became leader
	W1122 00:57:27.775327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:57:27.787195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1122 00:57:27.888050       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-165130_2bc0a3bc-8de2-490b-b7e4-2857944b60fb!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-165130 -n no-preload-165130
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-165130 -n no-preload-165130: exit status 2 (534.278224ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-165130 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-165130
helpers_test.go:243: (dbg) docker inspect no-preload-165130:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1c65dce5fc4b31daf0735f57309cfc4584efb68447875212bc03d5cc1861ca03",
	        "Created": "2025-11-22T00:54:44.324816446Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 708042,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-22T00:56:26.217221926Z",
	            "FinishedAt": "2025-11-22T00:56:25.408968932Z"
	        },
	        "Image": "sha256:97061622515a85470dad13cb046ad5fc5021fcd2b05aa921a620abb52951cd5d",
	        "ResolvConfPath": "/var/lib/docker/containers/1c65dce5fc4b31daf0735f57309cfc4584efb68447875212bc03d5cc1861ca03/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1c65dce5fc4b31daf0735f57309cfc4584efb68447875212bc03d5cc1861ca03/hostname",
	        "HostsPath": "/var/lib/docker/containers/1c65dce5fc4b31daf0735f57309cfc4584efb68447875212bc03d5cc1861ca03/hosts",
	        "LogPath": "/var/lib/docker/containers/1c65dce5fc4b31daf0735f57309cfc4584efb68447875212bc03d5cc1861ca03/1c65dce5fc4b31daf0735f57309cfc4584efb68447875212bc03d5cc1861ca03-json.log",
	        "Name": "/no-preload-165130",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-165130:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-165130",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1c65dce5fc4b31daf0735f57309cfc4584efb68447875212bc03d5cc1861ca03",
	                "LowerDir": "/var/lib/docker/overlay2/1fcc7ed347f82b0a593d86e8b13d7b8b6ed58d69e01b67e3031748c6c4f0b12f-init/diff:/var/lib/docker/overlay2/7e8788c6de692bc1c3758a2bb2c4b8da0fbba26855f855c0f3b655bfbdd92f8e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1fcc7ed347f82b0a593d86e8b13d7b8b6ed58d69e01b67e3031748c6c4f0b12f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1fcc7ed347f82b0a593d86e8b13d7b8b6ed58d69e01b67e3031748c6c4f0b12f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1fcc7ed347f82b0a593d86e8b13d7b8b6ed58d69e01b67e3031748c6c4f0b12f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-165130",
	                "Source": "/var/lib/docker/volumes/no-preload-165130/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-165130",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-165130",
	                "name.minikube.sigs.k8s.io": "no-preload-165130",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d7b03a47c62a7b8c32c7f61807f04ad75303ad145f5825429b5ec3cec82730d2",
	            "SandboxKey": "/var/run/docker/netns/d7b03a47c62a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33792"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33793"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33796"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33794"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33795"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-165130": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "06:93:52:56:ee:c4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0ab9f51973bdd85552219ab44a532b9743aba79f533b4d8124872498c1e7cb0a",
	                    "EndpointID": "564cd05f4317668e6f4ead3bb58f580b7413940d38dbde80038befae2aa1a688",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-165130",
	                        "1c65dce5fc4b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
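Rather than reading the full JSON dump above, a single field can be pulled with the same Go-template pattern the harness itself uses later in these logs; for example, the host port mapped to the API server port 8443/tcp:

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-165130

With the state captured above this would print 33795.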
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-165130 -n no-preload-165130
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-165130 -n no-preload-165130: exit status 2 (481.518833ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
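The two status probes above use Go templates over minikube's status output ({{.APIServer}} and {{.Host}}). A combined one-liner in the same style (the .Kubelet field name is assumed here, not taken from this log) would be:

    out/minikube-linux-arm64 status -p no-preload-165130 --format '{{.Host}}:{{.Kubelet}}:{{.APIServer}}'

minikube status exits non-zero whenever a component is not in the expected state, which is why the harness notes that exit status 2 "may be ok" for a paused cluster.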
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-165130 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-165130 logs -n 25: (1.923846383s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ ssh     │ -p cert-options-002126 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-002126    │ jenkins │ v1.37.0 │ 22 Nov 25 00:52 UTC │ 22 Nov 25 00:52 UTC │
	│ delete  │ -p cert-options-002126                                                                                                                                                                                                                        │ cert-options-002126    │ jenkins │ v1.37.0 │ 22 Nov 25 00:52 UTC │ 22 Nov 25 00:52 UTC │
	│ start   │ -p old-k8s-version-625837 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-625837 │ jenkins │ v1.37.0 │ 22 Nov 25 00:52 UTC │ 22 Nov 25 00:53 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-625837 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-625837 │ jenkins │ v1.37.0 │ 22 Nov 25 00:53 UTC │                     │
	│ stop    │ -p old-k8s-version-625837 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-625837 │ jenkins │ v1.37.0 │ 22 Nov 25 00:53 UTC │ 22 Nov 25 00:53 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-625837 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-625837 │ jenkins │ v1.37.0 │ 22 Nov 25 00:53 UTC │ 22 Nov 25 00:53 UTC │
	│ start   │ -p old-k8s-version-625837 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-625837 │ jenkins │ v1.37.0 │ 22 Nov 25 00:53 UTC │ 22 Nov 25 00:54 UTC │
	│ image   │ old-k8s-version-625837 image list --format=json                                                                                                                                                                                               │ old-k8s-version-625837 │ jenkins │ v1.37.0 │ 22 Nov 25 00:54 UTC │ 22 Nov 25 00:54 UTC │
	│ pause   │ -p old-k8s-version-625837 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-625837 │ jenkins │ v1.37.0 │ 22 Nov 25 00:54 UTC │                     │
	│ start   │ -p cert-expiration-621390 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-621390 │ jenkins │ v1.37.0 │ 22 Nov 25 00:54 UTC │ 22 Nov 25 00:55 UTC │
	│ delete  │ -p old-k8s-version-625837                                                                                                                                                                                                                     │ old-k8s-version-625837 │ jenkins │ v1.37.0 │ 22 Nov 25 00:54 UTC │ 22 Nov 25 00:54 UTC │
	│ delete  │ -p old-k8s-version-625837                                                                                                                                                                                                                     │ old-k8s-version-625837 │ jenkins │ v1.37.0 │ 22 Nov 25 00:54 UTC │ 22 Nov 25 00:54 UTC │
	│ start   │ -p no-preload-165130 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-165130      │ jenkins │ v1.37.0 │ 22 Nov 25 00:54 UTC │ 22 Nov 25 00:56 UTC │
	│ delete  │ -p cert-expiration-621390                                                                                                                                                                                                                     │ cert-expiration-621390 │ jenkins │ v1.37.0 │ 22 Nov 25 00:55 UTC │ 22 Nov 25 00:55 UTC │
	│ start   │ -p embed-certs-879000 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-879000     │ jenkins │ v1.37.0 │ 22 Nov 25 00:55 UTC │ 22 Nov 25 00:56 UTC │
	│ addons  │ enable metrics-server -p no-preload-165130 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-165130      │ jenkins │ v1.37.0 │ 22 Nov 25 00:56 UTC │                     │
	│ stop    │ -p no-preload-165130 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-165130      │ jenkins │ v1.37.0 │ 22 Nov 25 00:56 UTC │ 22 Nov 25 00:56 UTC │
	│ addons  │ enable dashboard -p no-preload-165130 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-165130      │ jenkins │ v1.37.0 │ 22 Nov 25 00:56 UTC │ 22 Nov 25 00:56 UTC │
	│ start   │ -p no-preload-165130 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-165130      │ jenkins │ v1.37.0 │ 22 Nov 25 00:56 UTC │ 22 Nov 25 00:57 UTC │
	│ addons  │ enable metrics-server -p embed-certs-879000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-879000     │ jenkins │ v1.37.0 │ 22 Nov 25 00:56 UTC │                     │
	│ stop    │ -p embed-certs-879000 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-879000     │ jenkins │ v1.37.0 │ 22 Nov 25 00:56 UTC │ 22 Nov 25 00:57 UTC │
	│ addons  │ enable dashboard -p embed-certs-879000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-879000     │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │ 22 Nov 25 00:57 UTC │
	│ start   │ -p embed-certs-879000 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-879000     │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │                     │
	│ image   │ no-preload-165130 image list --format=json                                                                                                                                                                                                    │ no-preload-165130      │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │ 22 Nov 25 00:57 UTC │
	│ pause   │ -p no-preload-165130 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-165130      │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 00:57:03
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1122 00:57:03.399087  710840 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:57:03.399199  710840 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:57:03.399209  710840 out.go:374] Setting ErrFile to fd 2...
	I1122 00:57:03.399215  710840 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:57:03.399478  710840 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1122 00:57:03.399844  710840 out.go:368] Setting JSON to false
	I1122 00:57:03.400771  710840 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":20340,"bootTime":1763752684,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1122 00:57:03.400844  710840 start.go:143] virtualization:  
	I1122 00:57:03.403697  710840 out.go:179] * [embed-certs-879000] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1122 00:57:03.407438  710840 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:57:03.407577  710840 notify.go:221] Checking for updates...
	I1122 00:57:03.413397  710840 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:57:03.416288  710840 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1122 00:57:03.419167  710840 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-513600/.minikube
	I1122 00:57:03.422137  710840 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1122 00:57:03.425025  710840 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:57:03.428470  710840 config.go:182] Loaded profile config "embed-certs-879000": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:57:03.429053  710840 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:57:03.459740  710840 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1122 00:57:03.459857  710840 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:57:03.518225  710840 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-22 00:57:03.50841844 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:57:03.518349  710840 docker.go:319] overlay module found
	I1122 00:57:03.521573  710840 out.go:179] * Using the docker driver based on existing profile
	I1122 00:57:03.524292  710840 start.go:309] selected driver: docker
	I1122 00:57:03.524312  710840 start.go:930] validating driver "docker" against &{Name:embed-certs-879000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-879000 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:57:03.524403  710840 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:57:03.525118  710840 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:57:03.582349  710840 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-22 00:57:03.573362057 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:57:03.582684  710840 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:57:03.582719  710840 cni.go:84] Creating CNI manager for ""
	I1122 00:57:03.582780  710840 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:57:03.582831  710840 start.go:353] cluster config:
	{Name:embed-certs-879000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-879000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:57:03.587979  710840 out.go:179] * Starting "embed-certs-879000" primary control-plane node in "embed-certs-879000" cluster
	I1122 00:57:03.590773  710840 cache.go:134] Beginning downloading kic base image for docker with crio
	I1122 00:57:03.593670  710840 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:57:03.596459  710840 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:57:03.596484  710840 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:57:03.596518  710840 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1122 00:57:03.596539  710840 cache.go:65] Caching tarball of preloaded images
	I1122 00:57:03.596620  710840 preload.go:238] Found /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1122 00:57:03.596630  710840 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1122 00:57:03.596743  710840 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/embed-certs-879000/config.json ...
	I1122 00:57:03.616904  710840 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:57:03.616927  710840 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:57:03.616948  710840 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:57:03.616971  710840 start.go:360] acquireMachinesLock for embed-certs-879000: {Name:mk05ac8d8898660ab51c5645d9a1c115c537bdda Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:57:03.617034  710840 start.go:364] duration metric: took 41.049µs to acquireMachinesLock for "embed-certs-879000"
	I1122 00:57:03.617059  710840 start.go:96] Skipping create...Using existing machine configuration
	I1122 00:57:03.617065  710840 fix.go:54] fixHost starting: 
	I1122 00:57:03.617334  710840 cli_runner.go:164] Run: docker container inspect embed-certs-879000 --format={{.State.Status}}
	I1122 00:57:03.637186  710840 fix.go:112] recreateIfNeeded on embed-certs-879000: state=Stopped err=<nil>
	W1122 00:57:03.637216  710840 fix.go:138] unexpected machine state, will restart: <nil>
	W1122 00:57:01.127793  707914 pod_ready.go:104] pod "coredns-66bc5c9577-pt27w" is not "Ready", error: <nil>
	W1122 00:57:03.128662  707914 pod_ready.go:104] pod "coredns-66bc5c9577-pt27w" is not "Ready", error: <nil>
	W1122 00:57:05.628253  707914 pod_ready.go:104] pod "coredns-66bc5c9577-pt27w" is not "Ready", error: <nil>
	I1122 00:57:03.640356  710840 out.go:252] * Restarting existing docker container for "embed-certs-879000" ...
	I1122 00:57:03.640442  710840 cli_runner.go:164] Run: docker start embed-certs-879000
	I1122 00:57:03.913531  710840 cli_runner.go:164] Run: docker container inspect embed-certs-879000 --format={{.State.Status}}
	I1122 00:57:03.933482  710840 kic.go:430] container "embed-certs-879000" state is running.
	I1122 00:57:03.933909  710840 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-879000
	I1122 00:57:03.958177  710840 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/embed-certs-879000/config.json ...
	I1122 00:57:03.958537  710840 machine.go:94] provisionDockerMachine start ...
	I1122 00:57:03.958655  710840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879000
	I1122 00:57:03.980856  710840 main.go:143] libmachine: Using SSH client type: native
	I1122 00:57:03.981176  710840 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33797 <nil> <nil>}
	I1122 00:57:03.981186  710840 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:57:03.982159  710840 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1122 00:57:07.127422  710840 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-879000
	
	I1122 00:57:07.127445  710840 ubuntu.go:182] provisioning hostname "embed-certs-879000"
	I1122 00:57:07.127505  710840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879000
	I1122 00:57:07.144942  710840 main.go:143] libmachine: Using SSH client type: native
	I1122 00:57:07.145328  710840 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33797 <nil> <nil>}
	I1122 00:57:07.145348  710840 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-879000 && echo "embed-certs-879000" | sudo tee /etc/hostname
	I1122 00:57:07.299001  710840 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-879000
	
	I1122 00:57:07.299075  710840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879000
	I1122 00:57:07.317619  710840 main.go:143] libmachine: Using SSH client type: native
	I1122 00:57:07.317958  710840 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33797 <nil> <nil>}
	I1122 00:57:07.317981  710840 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-879000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-879000/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-879000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:57:07.457911  710840 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1122 00:57:07.457978  710840 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-513600/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-513600/.minikube}
	I1122 00:57:07.458014  710840 ubuntu.go:190] setting up certificates
	I1122 00:57:07.458056  710840 provision.go:84] configureAuth start
	I1122 00:57:07.458136  710840 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-879000
	I1122 00:57:07.476012  710840 provision.go:143] copyHostCerts
	I1122 00:57:07.476076  710840 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem, removing ...
	I1122 00:57:07.476096  710840 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem
	I1122 00:57:07.476250  710840 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem (1078 bytes)
	I1122 00:57:07.476377  710840 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem, removing ...
	I1122 00:57:07.476384  710840 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem
	I1122 00:57:07.476414  710840 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem (1123 bytes)
	I1122 00:57:07.476469  710840 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem, removing ...
	I1122 00:57:07.476474  710840 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem
	I1122 00:57:07.476496  710840 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem (1675 bytes)
	I1122 00:57:07.476541  710840 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem org=jenkins.embed-certs-879000 san=[127.0.0.1 192.168.76.2 embed-certs-879000 localhost minikube]
	I1122 00:57:07.704698  710840 provision.go:177] copyRemoteCerts
	I1122 00:57:07.704792  710840 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:57:07.704848  710840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879000
	I1122 00:57:07.722169  710840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33797 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/embed-certs-879000/id_rsa Username:docker}
	I1122 00:57:07.829878  710840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:57:07.848675  710840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1122 00:57:07.868141  710840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1122 00:57:07.886348  710840 provision.go:87] duration metric: took 428.251263ms to configureAuth
	I1122 00:57:07.886380  710840 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:57:07.886577  710840 config.go:182] Loaded profile config "embed-certs-879000": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:57:07.886713  710840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879000
	I1122 00:57:07.907276  710840 main.go:143] libmachine: Using SSH client type: native
	I1122 00:57:07.907601  710840 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33797 <nil> <nil>}
	I1122 00:57:07.907615  710840 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1122 00:57:08.270080  710840 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1122 00:57:08.270103  710840 machine.go:97] duration metric: took 4.311552598s to provisionDockerMachine
	I1122 00:57:08.270114  710840 start.go:293] postStartSetup for "embed-certs-879000" (driver="docker")
	I1122 00:57:08.270129  710840 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:57:08.270186  710840 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:57:08.270226  710840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879000
	I1122 00:57:08.289133  710840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33797 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/embed-certs-879000/id_rsa Username:docker}
	I1122 00:57:08.389689  710840 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:57:08.393154  710840 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:57:08.393189  710840 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:57:08.393203  710840 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/addons for local assets ...
	I1122 00:57:08.393260  710840 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/files for local assets ...
	I1122 00:57:08.393353  710840 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem -> 5169372.pem in /etc/ssl/certs
	I1122 00:57:08.393458  710840 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:57:08.401180  710840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /etc/ssl/certs/5169372.pem (1708 bytes)
	I1122 00:57:08.419856  710840 start.go:296] duration metric: took 149.727089ms for postStartSetup
	I1122 00:57:08.419936  710840 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:57:08.419994  710840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879000
	I1122 00:57:08.438594  710840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33797 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/embed-certs-879000/id_rsa Username:docker}
	I1122 00:57:08.539288  710840 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:57:08.544246  710840 fix.go:56] duration metric: took 4.927174016s for fixHost
	I1122 00:57:08.544290  710840 start.go:83] releasing machines lock for "embed-certs-879000", held for 4.927226198s
	I1122 00:57:08.544362  710840 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-879000
	I1122 00:57:08.561215  710840 ssh_runner.go:195] Run: cat /version.json
	I1122 00:57:08.561271  710840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879000
	I1122 00:57:08.561576  710840 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:57:08.561632  710840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879000
	I1122 00:57:08.584530  710840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33797 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/embed-certs-879000/id_rsa Username:docker}
	I1122 00:57:08.585440  710840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33797 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/embed-certs-879000/id_rsa Username:docker}
	I1122 00:57:08.779711  710840 ssh_runner.go:195] Run: systemctl --version
	I1122 00:57:08.786262  710840 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1122 00:57:08.824635  710840 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:57:08.829301  710840 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:57:08.829425  710840 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:57:08.837579  710840 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1122 00:57:08.837601  710840 start.go:496] detecting cgroup driver to use...
	I1122 00:57:08.837631  710840 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1122 00:57:08.837684  710840 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1122 00:57:08.854957  710840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1122 00:57:08.868339  710840 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:57:08.868401  710840 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:57:08.883697  710840 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:57:08.897515  710840 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:57:09.033037  710840 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:57:09.163035  710840 docker.go:234] disabling docker service ...
	I1122 00:57:09.163095  710840 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:57:09.178870  710840 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:57:09.204580  710840 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:57:09.328676  710840 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:57:09.468076  710840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:57:09.482064  710840 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:57:09.498211  710840 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1122 00:57:09.498299  710840 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:57:09.507951  710840 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1122 00:57:09.508026  710840 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:57:09.517496  710840 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:57:09.526611  710840 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:57:09.536057  710840 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:57:09.546910  710840 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:57:09.555650  710840 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:57:09.564100  710840 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:57:09.573549  710840 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:57:09.581049  710840 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:57:09.588421  710840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:57:09.698893  710840 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1122 00:57:09.869188  710840 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1122 00:57:09.869321  710840 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1122 00:57:09.873228  710840 start.go:564] Will wait 60s for crictl version
	I1122 00:57:09.873335  710840 ssh_runner.go:195] Run: which crictl
	I1122 00:57:09.876798  710840 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:57:09.902164  710840 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1122 00:57:09.902300  710840 ssh_runner.go:195] Run: crio --version
	I1122 00:57:09.934519  710840 ssh_runner.go:195] Run: crio --version
	I1122 00:57:09.965060  710840 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
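	To recap the runtime setup steps above: the sed commands rewrite keys in /etc/crio/crio.conf.d/02-crio.conf and crio is then restarted. As a rough sketch only (the file itself is not captured in this log, and the section headers are assumed), the affected keys would end up along these lines:

	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.10.1"

	    [crio.runtime]
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]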
	W1122 00:57:08.128579  707914 pod_ready.go:104] pod "coredns-66bc5c9577-pt27w" is not "Ready", error: <nil>
	W1122 00:57:10.628264  707914 pod_ready.go:104] pod "coredns-66bc5c9577-pt27w" is not "Ready", error: <nil>
	I1122 00:57:09.967932  710840 cli_runner.go:164] Run: docker network inspect embed-certs-879000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:57:09.984828  710840 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1122 00:57:09.989073  710840 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:57:10.012064  710840 kubeadm.go:884] updating cluster {Name:embed-certs-879000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-879000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1122 00:57:10.012220  710840 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:57:10.012289  710840 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:57:10.055609  710840 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:57:10.055636  710840 crio.go:433] Images already preloaded, skipping extraction
	I1122 00:57:10.055696  710840 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:57:10.083515  710840 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:57:10.083543  710840 cache_images.go:86] Images are preloaded, skipping loading
	I1122 00:57:10.083553  710840 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1122 00:57:10.083656  710840 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-879000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-879000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
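Annotation: the kubelet unit fragment above becomes the systemd drop-in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (written a few lines below, 368 bytes). The empty ExecStart= line is deliberate: in a drop-in it clears whatever ExecStart the base /lib/systemd/system/kubelet.service defines, so only the minikube-specific command line remains. One way to inspect the merged unit on the node, sketched with this run's profile name:

    minikube -p embed-certs-879000 ssh -- systemctl cat kubelet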
	I1122 00:57:10.083772  710840 ssh_runner.go:195] Run: crio config
	I1122 00:57:10.154744  710840 cni.go:84] Creating CNI manager for ""
	I1122 00:57:10.154769  710840 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:57:10.154793  710840 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1122 00:57:10.154816  710840 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-879000 NodeName:embed-certs-879000 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 00:57:10.154955  710840 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-879000"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
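Annotation: the block above is a single multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) that is shipped to the node as /var/tmp/minikube/kubeadm.yaml.new a few lines further down. A quick sanity check that all four documents made it across, sketched with this run's profile name:

    minikube -p embed-certs-879000 ssh -- grep '^kind:' /var/tmp/minikube/kubeadm.yaml.new
    # expected: InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration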
	
	I1122 00:57:10.155038  710840 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 00:57:10.163422  710840 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:57:10.163505  710840 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1122 00:57:10.171793  710840 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1122 00:57:10.184834  710840 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:57:10.203743  710840 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1122 00:57:10.221464  710840 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1122 00:57:10.225438  710840 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:57:10.236155  710840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:57:10.383021  710840 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:57:10.400057  710840 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/embed-certs-879000 for IP: 192.168.76.2
	I1122 00:57:10.400080  710840 certs.go:195] generating shared ca certs ...
	I1122 00:57:10.400096  710840 certs.go:227] acquiring lock for ca certs: {Name:mkaf4c79493334cb2058b349c36be7473837f9b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:57:10.400237  710840 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key
	I1122 00:57:10.400309  710840 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key
	I1122 00:57:10.400321  710840 certs.go:257] generating profile certs ...
	I1122 00:57:10.400413  710840 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/embed-certs-879000/client.key
	I1122 00:57:10.400487  710840 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/embed-certs-879000/apiserver.key.f00c2ee1
	I1122 00:57:10.400542  710840 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/embed-certs-879000/proxy-client.key
	I1122 00:57:10.400654  710840 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem (1338 bytes)
	W1122 00:57:10.400688  710840 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937_empty.pem, impossibly tiny 0 bytes
	I1122 00:57:10.400701  710840 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:57:10.400733  710840 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:57:10.400760  710840 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:57:10.400790  710840 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem (1675 bytes)
	I1122 00:57:10.400846  710840 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem (1708 bytes)
	I1122 00:57:10.407254  710840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:57:10.433183  710840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1122 00:57:10.470216  710840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:57:10.491628  710840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:57:10.510784  710840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/embed-certs-879000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1122 00:57:10.531296  710840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/embed-certs-879000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1122 00:57:10.553866  710840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/embed-certs-879000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:57:10.575750  710840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/embed-certs-879000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1122 00:57:10.600564  710840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:57:10.628332  710840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem --> /usr/share/ca-certificates/516937.pem (1338 bytes)
	I1122 00:57:10.659418  710840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /usr/share/ca-certificates/5169372.pem (1708 bytes)
	I1122 00:57:10.682500  710840 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 00:57:10.708292  710840 ssh_runner.go:195] Run: openssl version
	I1122 00:57:10.714718  710840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516937.pem && ln -fs /usr/share/ca-certificates/516937.pem /etc/ssl/certs/516937.pem"
	I1122 00:57:10.723834  710840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516937.pem
	I1122 00:57:10.727780  710840 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:55 /usr/share/ca-certificates/516937.pem
	I1122 00:57:10.727845  710840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516937.pem
	I1122 00:57:10.773353  710840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516937.pem /etc/ssl/certs/51391683.0"
	I1122 00:57:10.783087  710840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5169372.pem && ln -fs /usr/share/ca-certificates/5169372.pem /etc/ssl/certs/5169372.pem"
	I1122 00:57:10.793870  710840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5169372.pem
	I1122 00:57:10.798689  710840 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:55 /usr/share/ca-certificates/5169372.pem
	I1122 00:57:10.798819  710840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5169372.pem
	I1122 00:57:10.846814  710840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5169372.pem /etc/ssl/certs/3ec20f2e.0"
	I1122 00:57:10.855486  710840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:57:10.863954  710840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:57:10.868193  710840 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:57:10.868261  710840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:57:10.910054  710840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
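Annotation: the ln -fs commands above follow OpenSSL's hashed-directory convention. Tools look up CAs in /etc/ssl/certs by <subject-hash>.0, so each PEM placed there also gets a symlink named after its subject hash (51391683, 3ec20f2e and b5213941 in this run). The same two steps by hand, sketched for the minikubeCA certificate:

    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints e.g. b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"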
	I1122 00:57:10.918636  710840 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:57:10.923119  710840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1122 00:57:10.964597  710840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1122 00:57:11.006470  710840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1122 00:57:11.048433  710840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1122 00:57:11.090774  710840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1122 00:57:11.142676  710840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
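Annotation: the six openssl checks above all use -checkend 86400, which exits non-zero if the certificate expires within the next 86400 seconds; in other words, an existing control-plane cert is only reused if it stays valid for at least another 24 hours. For example:

    if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
        echo "cert is good for at least another 24h"
    else
        echo "cert expires within 24h (or is already expired), regenerate it"
    fi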
	I1122 00:57:11.211600  710840 kubeadm.go:401] StartCluster: {Name:embed-certs-879000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-879000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:57:11.211695  710840 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1122 00:57:11.211753  710840 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:57:11.325907  710840 cri.go:89] found id: "9852dc8e953e4082535d9da09fe8c7c488642b2923223eea6484d9282094e3ea"
	I1122 00:57:11.325954  710840 cri.go:89] found id: "f660ef303bd4694fbdad76a7eb87133a3cca27093a6685ac673521dce9c9d434"
	I1122 00:57:11.325973  710840 cri.go:89] found id: "d2057db699cba9b5fd5582afa88f5011c61af54cfbf9b6be282bae14ccb3e06b"
	I1122 00:57:11.325994  710840 cri.go:89] found id: "53b12e2f48badfbf5f25cd651f43e00f0d1451191aa045dab44e2461293c766c"
	I1122 00:57:11.326018  710840 cri.go:89] found id: ""
	I1122 00:57:11.326081  710840 ssh_runner.go:195] Run: sudo runc list -f json
	W1122 00:57:11.352405  710840 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:57:11Z" level=error msg="open /run/runc: no such file or directory"
	I1122 00:57:11.352536  710840 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 00:57:11.374273  710840 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1122 00:57:11.374338  710840 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1122 00:57:11.374405  710840 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1122 00:57:11.386479  710840 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1122 00:57:11.387126  710840 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-879000" does not appear in /home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1122 00:57:11.387437  710840 kubeconfig.go:62] /home/jenkins/minikube-integration/21934-513600/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-879000" cluster setting kubeconfig missing "embed-certs-879000" context setting]
	I1122 00:57:11.387959  710840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/kubeconfig: {Name:mkcd094d8a765e5a81369c0fb33cb5e696b17bb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:57:11.389723  710840 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1122 00:57:11.410217  710840 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1122 00:57:11.410262  710840 kubeadm.go:602] duration metric: took 35.904558ms to restartPrimaryControlPlane
	I1122 00:57:11.410273  710840 kubeadm.go:403] duration metric: took 198.683876ms to StartCluster
	I1122 00:57:11.410313  710840 settings.go:142] acquiring lock: {Name:mk6c31eb57ec65b047b78b4e1046e03fe7cc77bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:57:11.410395  710840 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1122 00:57:11.411792  710840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/kubeconfig: {Name:mkcd094d8a765e5a81369c0fb33cb5e696b17bb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:57:11.412072  710840 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:57:11.412466  710840 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1122 00:57:11.412542  710840 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-879000"
	I1122 00:57:11.412556  710840 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-879000"
	W1122 00:57:11.412569  710840 addons.go:248] addon storage-provisioner should already be in state true
	I1122 00:57:11.412592  710840 host.go:66] Checking if "embed-certs-879000" exists ...
	I1122 00:57:11.413068  710840 cli_runner.go:164] Run: docker container inspect embed-certs-879000 --format={{.State.Status}}
	I1122 00:57:11.413391  710840 config.go:182] Loaded profile config "embed-certs-879000": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:57:11.413462  710840 addons.go:70] Setting dashboard=true in profile "embed-certs-879000"
	I1122 00:57:11.413481  710840 addons.go:239] Setting addon dashboard=true in "embed-certs-879000"
	W1122 00:57:11.413488  710840 addons.go:248] addon dashboard should already be in state true
	I1122 00:57:11.413525  710840 host.go:66] Checking if "embed-certs-879000" exists ...
	I1122 00:57:11.414199  710840 cli_runner.go:164] Run: docker container inspect embed-certs-879000 --format={{.State.Status}}
	I1122 00:57:11.414589  710840 addons.go:70] Setting default-storageclass=true in profile "embed-certs-879000"
	I1122 00:57:11.414613  710840 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-879000"
	I1122 00:57:11.414888  710840 cli_runner.go:164] Run: docker container inspect embed-certs-879000 --format={{.State.Status}}
	I1122 00:57:11.425503  710840 out.go:179] * Verifying Kubernetes components...
	I1122 00:57:11.433314  710840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:57:11.471294  710840 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1122 00:57:11.474372  710840 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:57:11.474392  710840 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1122 00:57:11.474453  710840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879000
	I1122 00:57:11.478849  710840 addons.go:239] Setting addon default-storageclass=true in "embed-certs-879000"
	W1122 00:57:11.478870  710840 addons.go:248] addon default-storageclass should already be in state true
	I1122 00:57:11.478893  710840 host.go:66] Checking if "embed-certs-879000" exists ...
	I1122 00:57:11.479457  710840 cli_runner.go:164] Run: docker container inspect embed-certs-879000 --format={{.State.Status}}
	I1122 00:57:11.479767  710840 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1122 00:57:11.482948  710840 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1122 00:57:11.148176  707914 pod_ready.go:94] pod "coredns-66bc5c9577-pt27w" is "Ready"
	I1122 00:57:11.148211  707914 pod_ready.go:86] duration metric: took 31.026348234s for pod "coredns-66bc5c9577-pt27w" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:57:11.159368  707914 pod_ready.go:83] waiting for pod "etcd-no-preload-165130" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:57:11.168009  707914 pod_ready.go:94] pod "etcd-no-preload-165130" is "Ready"
	I1122 00:57:11.168039  707914 pod_ready.go:86] duration metric: took 8.642373ms for pod "etcd-no-preload-165130" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:57:11.256649  707914 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-165130" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:57:11.264741  707914 pod_ready.go:94] pod "kube-apiserver-no-preload-165130" is "Ready"
	I1122 00:57:11.264763  707914 pod_ready.go:86] duration metric: took 8.088773ms for pod "kube-apiserver-no-preload-165130" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:57:11.267516  707914 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-165130" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:57:11.325617  707914 pod_ready.go:94] pod "kube-controller-manager-no-preload-165130" is "Ready"
	I1122 00:57:11.325640  707914 pod_ready.go:86] duration metric: took 58.104171ms for pod "kube-controller-manager-no-preload-165130" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:57:11.529199  707914 pod_ready.go:83] waiting for pod "kube-proxy-kr4ll" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:57:11.925260  707914 pod_ready.go:94] pod "kube-proxy-kr4ll" is "Ready"
	I1122 00:57:11.925290  707914 pod_ready.go:86] duration metric: took 396.058587ms for pod "kube-proxy-kr4ll" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:57:12.125736  707914 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-165130" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:57:12.525038  707914 pod_ready.go:94] pod "kube-scheduler-no-preload-165130" is "Ready"
	I1122 00:57:12.525069  707914 pod_ready.go:86] duration metric: took 399.302358ms for pod "kube-scheduler-no-preload-165130" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:57:12.525083  707914 pod_ready.go:40] duration metric: took 32.417440406s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:57:12.628049  707914 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1122 00:57:12.631398  707914 out.go:179] * Done! kubectl is now configured to use "no-preload-165130" cluster and "default" namespace by default
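Annotation: the "minor skew: 1" note above is informational only; kubectl 1.33 against a 1.34 API server is within the one-minor-version skew that kubectl supports, so no action is needed. Both versions can be checked explicitly (sketch):

    kubectl --context no-preload-165130 version --output=yaml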
	I1122 00:57:11.487774  710840 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1122 00:57:11.487796  710840 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1122 00:57:11.487860  710840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879000
	I1122 00:57:11.545941  710840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33797 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/embed-certs-879000/id_rsa Username:docker}
	I1122 00:57:11.559356  710840 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1122 00:57:11.559389  710840 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1122 00:57:11.559450  710840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879000
	I1122 00:57:11.559798  710840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33797 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/embed-certs-879000/id_rsa Username:docker}
	I1122 00:57:11.589479  710840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33797 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/embed-certs-879000/id_rsa Username:docker}
	I1122 00:57:11.775851  710840 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:57:11.802252  710840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:57:11.803759  710840 node_ready.go:35] waiting up to 6m0s for node "embed-certs-879000" to be "Ready" ...
	I1122 00:57:11.854750  710840 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1122 00:57:11.854824  710840 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1122 00:57:11.879094  710840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1122 00:57:11.915188  710840 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1122 00:57:11.915273  710840 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1122 00:57:12.005164  710840 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1122 00:57:12.005252  710840 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1122 00:57:12.079473  710840 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1122 00:57:12.079550  710840 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1122 00:57:12.145702  710840 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1122 00:57:12.145782  710840 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1122 00:57:12.180120  710840 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1122 00:57:12.180196  710840 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1122 00:57:12.201560  710840 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1122 00:57:12.201632  710840 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1122 00:57:12.228109  710840 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1122 00:57:12.228183  710840 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1122 00:57:12.247327  710840 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1122 00:57:12.247397  710840 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1122 00:57:12.271810  710840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1122 00:57:16.180749  710840 node_ready.go:49] node "embed-certs-879000" is "Ready"
	I1122 00:57:16.180776  710840 node_ready.go:38] duration metric: took 4.3769276s for node "embed-certs-879000" to be "Ready" ...
	I1122 00:57:16.180791  710840 api_server.go:52] waiting for apiserver process to appear ...
	I1122 00:57:16.180864  710840 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:57:18.165598  710840 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.363309216s)
	I1122 00:57:18.165707  710840 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.286542585s)
	I1122 00:57:18.222681  710840 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.95079118s)
	I1122 00:57:18.222863  710840 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.041986955s)
	I1122 00:57:18.222906  710840 api_server.go:72] duration metric: took 6.810796314s to wait for apiserver process to appear ...
	I1122 00:57:18.222928  710840 api_server.go:88] waiting for apiserver healthz status ...
	I1122 00:57:18.222959  710840 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:57:18.226719  710840 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-879000 addons enable metrics-server
	
	I1122 00:57:18.230708  710840 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1122 00:57:18.233368  710840 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1122 00:57:18.233426  710840 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1122 00:57:18.233714  710840 addons.go:530] duration metric: took 6.821248914s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1122 00:57:18.723959  710840 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:57:18.731985  710840 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1122 00:57:18.733024  710840 api_server.go:141] control plane version: v1.34.1
	I1122 00:57:18.733050  710840 api_server.go:131] duration metric: took 510.103259ms to wait for apiserver health ...
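Annotation: the healthz polling above first returns 500 because the rbac/bootstrap-roles post-start hook has not finished, then a plain 200 "ok" about half a second later. The same per-check breakdown can be requested on demand once the cluster is reachable; a sketch (the raw endpoint may require credentials depending on the cluster's anonymous-auth settings):

    kubectl --context embed-certs-879000 get --raw '/healthz?verbose'
    # or straight against the endpoint from the log:
    curl -k https://192.168.76.2:8443/healthz?verbose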
	I1122 00:57:18.733059  710840 system_pods.go:43] waiting for kube-system pods to appear ...
	I1122 00:57:18.743382  710840 system_pods.go:59] 8 kube-system pods found
	I1122 00:57:18.743464  710840 system_pods.go:61] "coredns-66bc5c9577-h2kpd" [5adad534-0ba4-479f-8e5a-7f5a9e26fb1e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:57:18.743488  710840 system_pods.go:61] "etcd-embed-certs-879000" [7cebfe87-7413-4cfe-8899-73cdba19a310] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1122 00:57:18.743508  710840 system_pods.go:61] "kindnet-j8wwg" [29cadb16-a427-4f8b-b121-3af35927f8d5] Running
	I1122 00:57:18.743542  710840 system_pods.go:61] "kube-apiserver-embed-certs-879000" [66ed66fb-cf57-49c6-a5fc-a8814e40c10b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1122 00:57:18.743567  710840 system_pods.go:61] "kube-controller-manager-embed-certs-879000" [347609cd-705b-441f-941f-936a0e0574f7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1122 00:57:18.743588  710840 system_pods.go:61] "kube-proxy-w9bqj" [f56c390b-4d40-40a3-9862-f5081a6561e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1122 00:57:18.743609  710840 system_pods.go:61] "kube-scheduler-embed-certs-879000" [364d55f5-1b98-4087-999c-c7302863e10f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1122 00:57:18.743629  710840 system_pods.go:61] "storage-provisioner" [042e1631-1c8e-4ce0-92e5-cdd4742fa06b] Running
	I1122 00:57:18.743662  710840 system_pods.go:74] duration metric: took 10.596728ms to wait for pod list to return data ...
	I1122 00:57:18.743683  710840 default_sa.go:34] waiting for default service account to be created ...
	I1122 00:57:18.746874  710840 default_sa.go:45] found service account: "default"
	I1122 00:57:18.746900  710840 default_sa.go:55] duration metric: took 3.199947ms for default service account to be created ...
	I1122 00:57:18.746910  710840 system_pods.go:116] waiting for k8s-apps to be running ...
	I1122 00:57:18.749794  710840 system_pods.go:86] 8 kube-system pods found
	I1122 00:57:18.749865  710840 system_pods.go:89] "coredns-66bc5c9577-h2kpd" [5adad534-0ba4-479f-8e5a-7f5a9e26fb1e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:57:18.749875  710840 system_pods.go:89] "etcd-embed-certs-879000" [7cebfe87-7413-4cfe-8899-73cdba19a310] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1122 00:57:18.749881  710840 system_pods.go:89] "kindnet-j8wwg" [29cadb16-a427-4f8b-b121-3af35927f8d5] Running
	I1122 00:57:18.749888  710840 system_pods.go:89] "kube-apiserver-embed-certs-879000" [66ed66fb-cf57-49c6-a5fc-a8814e40c10b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1122 00:57:18.749895  710840 system_pods.go:89] "kube-controller-manager-embed-certs-879000" [347609cd-705b-441f-941f-936a0e0574f7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1122 00:57:18.749905  710840 system_pods.go:89] "kube-proxy-w9bqj" [f56c390b-4d40-40a3-9862-f5081a6561e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1122 00:57:18.749911  710840 system_pods.go:89] "kube-scheduler-embed-certs-879000" [364d55f5-1b98-4087-999c-c7302863e10f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1122 00:57:18.749919  710840 system_pods.go:89] "storage-provisioner" [042e1631-1c8e-4ce0-92e5-cdd4742fa06b] Running
	I1122 00:57:18.749927  710840 system_pods.go:126] duration metric: took 3.010783ms to wait for k8s-apps to be running ...
	I1122 00:57:18.749940  710840 system_svc.go:44] waiting for kubelet service to be running ....
	I1122 00:57:18.749997  710840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:57:18.763126  710840 system_svc.go:56] duration metric: took 13.165736ms WaitForService to wait for kubelet
	I1122 00:57:18.763192  710840 kubeadm.go:587] duration metric: took 7.3510799s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:57:18.763234  710840 node_conditions.go:102] verifying NodePressure condition ...
	I1122 00:57:18.766578  710840 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1122 00:57:18.766650  710840 node_conditions.go:123] node cpu capacity is 2
	I1122 00:57:18.766676  710840 node_conditions.go:105] duration metric: took 3.410994ms to run NodePressure ...
	I1122 00:57:18.766701  710840 start.go:242] waiting for startup goroutines ...
	I1122 00:57:18.766736  710840 start.go:247] waiting for cluster config update ...
	I1122 00:57:18.766766  710840 start.go:256] writing updated cluster config ...
	I1122 00:57:18.767064  710840 ssh_runner.go:195] Run: rm -f paused
	I1122 00:57:18.771051  710840 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:57:18.775605  710840 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-h2kpd" in "kube-system" namespace to be "Ready" or be gone ...
	W1122 00:57:20.781042  710840 pod_ready.go:104] pod "coredns-66bc5c9577-h2kpd" is not "Ready", error: <nil>
	W1122 00:57:22.782264  710840 pod_ready.go:104] pod "coredns-66bc5c9577-h2kpd" is not "Ready", error: <nil>
	W1122 00:57:24.789060  710840 pod_ready.go:104] pod "coredns-66bc5c9577-h2kpd" is not "Ready", error: <nil>
	W1122 00:57:27.283114  710840 pod_ready.go:104] pod "coredns-66bc5c9577-h2kpd" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Nov 22 00:57:09 no-preload-165130 crio[654]: time="2025-11-22T00:57:09.207013506Z" level=info msg="Removed container bb223c78c451293e80b95ccadbece2f1a33e69511725ec01523fe5179e972fc5: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zvt5d/dashboard-metrics-scraper" id=ae6db141-dc1e-4172-b48a-c487b0c920d6 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 22 00:57:09 no-preload-165130 conmon[1151]: conmon 68cc2c66134d4931da62 <ninfo>: container 1159 exited with status 1
	Nov 22 00:57:10 no-preload-165130 crio[654]: time="2025-11-22T00:57:10.195841569Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=5ddd92d7-80ad-400f-afc9-9a9e5af8ff2a name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:57:10 no-preload-165130 crio[654]: time="2025-11-22T00:57:10.199303253Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=7668790b-b40f-4332-b1ba-4d58f00d68fd name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:57:10 no-preload-165130 crio[654]: time="2025-11-22T00:57:10.200414047Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=b4f391e9-f06e-4af5-80ea-a02c8171c8da name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:57:10 no-preload-165130 crio[654]: time="2025-11-22T00:57:10.200524517Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:57:10 no-preload-165130 crio[654]: time="2025-11-22T00:57:10.213839533Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:57:10 no-preload-165130 crio[654]: time="2025-11-22T00:57:10.215800862Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/844ef4f756dabfb7c04eb582c6846d8219999d6fe7f1bd4f6a2a91ef33e3eea6/merged/etc/passwd: no such file or directory"
	Nov 22 00:57:10 no-preload-165130 crio[654]: time="2025-11-22T00:57:10.215837333Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/844ef4f756dabfb7c04eb582c6846d8219999d6fe7f1bd4f6a2a91ef33e3eea6/merged/etc/group: no such file or directory"
	Nov 22 00:57:10 no-preload-165130 crio[654]: time="2025-11-22T00:57:10.216097871Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:57:10 no-preload-165130 crio[654]: time="2025-11-22T00:57:10.257193678Z" level=info msg="Created container 8e14a16d0cdf92563d2c084ac281b0188eb1b47be445498e8be3c53d189c3b19: kube-system/storage-provisioner/storage-provisioner" id=b4f391e9-f06e-4af5-80ea-a02c8171c8da name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:57:10 no-preload-165130 crio[654]: time="2025-11-22T00:57:10.258367936Z" level=info msg="Starting container: 8e14a16d0cdf92563d2c084ac281b0188eb1b47be445498e8be3c53d189c3b19" id=2631d0bb-0542-4542-a59e-60d0627a6c69 name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:57:10 no-preload-165130 crio[654]: time="2025-11-22T00:57:10.260302723Z" level=info msg="Started container" PID=1644 containerID=8e14a16d0cdf92563d2c084ac281b0188eb1b47be445498e8be3c53d189c3b19 description=kube-system/storage-provisioner/storage-provisioner id=2631d0bb-0542-4542-a59e-60d0627a6c69 name=/runtime.v1.RuntimeService/StartContainer sandboxID=de917186bd543dff6663546b1fe8f75c626d3b36755cb5cd520ac14c3040abdf
	Nov 22 00:57:19 no-preload-165130 crio[654]: time="2025-11-22T00:57:19.739550882Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:57:19 no-preload-165130 crio[654]: time="2025-11-22T00:57:19.745872658Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:57:19 no-preload-165130 crio[654]: time="2025-11-22T00:57:19.745912107Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 22 00:57:19 no-preload-165130 crio[654]: time="2025-11-22T00:57:19.745933777Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:57:19 no-preload-165130 crio[654]: time="2025-11-22T00:57:19.748909525Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:57:19 no-preload-165130 crio[654]: time="2025-11-22T00:57:19.748943157Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 22 00:57:19 no-preload-165130 crio[654]: time="2025-11-22T00:57:19.748963473Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:57:19 no-preload-165130 crio[654]: time="2025-11-22T00:57:19.751781729Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:57:19 no-preload-165130 crio[654]: time="2025-11-22T00:57:19.751812357Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 22 00:57:19 no-preload-165130 crio[654]: time="2025-11-22T00:57:19.751835971Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:57:19 no-preload-165130 crio[654]: time="2025-11-22T00:57:19.755182835Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:57:19 no-preload-165130 crio[654]: time="2025-11-22T00:57:19.755214695Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	8e14a16d0cdf9       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           20 seconds ago      Running             storage-provisioner         2                   de917186bd543       storage-provisioner                          kube-system
	61a964437806a       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           22 seconds ago      Exited              dashboard-metrics-scraper   2                   30f6804605129       dashboard-metrics-scraper-6ffb444bf9-zvt5d   kubernetes-dashboard
	f3789d40f639b       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   43 seconds ago      Running             kubernetes-dashboard        0                   dad8eb59e956e       kubernetes-dashboard-855c9754f9-xsqns        kubernetes-dashboard
	000dd70b69821       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           51 seconds ago      Running             coredns                     1                   62f021300a0ba       coredns-66bc5c9577-pt27w                     kube-system
	a49954f61edb7       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           51 seconds ago      Running             busybox                     1                   d7eb53eae797e       busybox                                      default
	1b8b4ae3716d0       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           51 seconds ago      Running             kindnet-cni                 1                   cc9ad59dade89       kindnet-2kqbq                                kube-system
	68cc2c66134d4       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           51 seconds ago      Exited              storage-provisioner         1                   de917186bd543       storage-provisioner                          kube-system
	7f4fb030dec09       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           51 seconds ago      Running             kube-proxy                  1                   e194a404d72ac       kube-proxy-kr4ll                             kube-system
	d445939f66bc7       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           57 seconds ago      Running             kube-scheduler              1                   b36f8cf63dd12       kube-scheduler-no-preload-165130             kube-system
	1842c88afa2f9       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           57 seconds ago      Running             etcd                        1                   fc4c7d2dd1cf7       etcd-no-preload-165130                       kube-system
	4a703cddbc0fe       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           57 seconds ago      Running             kube-apiserver              1                   448f5420faef6       kube-apiserver-no-preload-165130             kube-system
	447fcc475a732       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           57 seconds ago      Running             kube-controller-manager     1                   af171786ca9dd       kube-controller-manager-no-preload-165130    kube-system
	
	
	==> coredns [000dd70b698217aa6d95bd509cf47f3362cb467c72c920839b74ece78d579568] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33127 - 21090 "HINFO IN 1763815445658371244.547625588154392815. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.025430509s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-165130
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-165130
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=no-preload-165130
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T00_55_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:55:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-165130
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:57:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:57:09 +0000   Sat, 22 Nov 2025 00:55:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:57:09 +0000   Sat, 22 Nov 2025 00:55:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:57:09 +0000   Sat, 22 Nov 2025 00:55:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 22 Nov 2025 00:57:09 +0000   Sat, 22 Nov 2025 00:55:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-165130
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c92eea4d3c03c0156e01fcf8691e3907
	  System UUID:                194834cc-9098-4e11-a16d-906d0fa2db99
	  Boot ID:                    72ac7385-472f-47d1-a23e-bd80468d6e09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-66bc5c9577-pt27w                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     108s
	  kube-system                 etcd-no-preload-165130                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         114s
	  kube-system                 kindnet-2kqbq                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-no-preload-165130              250m (12%)    0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-no-preload-165130     200m (10%)    0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-kr4ll                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-no-preload-165130              100m (5%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-zvt5d    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-xsqns         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 106s                 kube-proxy       
	  Normal   Starting                 51s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  2m5s (x8 over 2m5s)  kubelet          Node no-preload-165130 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m5s (x8 over 2m5s)  kubelet          Node no-preload-165130 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m5s (x8 over 2m5s)  kubelet          Node no-preload-165130 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    114s                 kubelet          Node no-preload-165130 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 114s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  114s                 kubelet          Node no-preload-165130 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     114s                 kubelet          Node no-preload-165130 status is now: NodeHasSufficientPID
	  Normal   Starting                 114s                 kubelet          Starting kubelet.
	  Normal   RegisteredNode           109s                 node-controller  Node no-preload-165130 event: Registered Node no-preload-165130 in Controller
	  Normal   NodeReady                92s                  kubelet          Node no-preload-165130 status is now: NodeReady
	  Normal   Starting                 59s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 59s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  58s (x8 over 59s)    kubelet          Node no-preload-165130 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    58s (x8 over 59s)    kubelet          Node no-preload-165130 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     58s (x8 over 59s)    kubelet          Node no-preload-165130 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           50s                  node-controller  Node no-preload-165130 event: Registered Node no-preload-165130 in Controller
	
	
	==> dmesg <==
	[Nov22 00:33] overlayfs: idmapped layers are currently not supported
	[Nov22 00:35] overlayfs: idmapped layers are currently not supported
	[Nov22 00:36] overlayfs: idmapped layers are currently not supported
	[ +18.168104] overlayfs: idmapped layers are currently not supported
	[Nov22 00:37] overlayfs: idmapped layers are currently not supported
	[ +56.322609] overlayfs: idmapped layers are currently not supported
	[Nov22 00:38] overlayfs: idmapped layers are currently not supported
	[Nov22 00:39] overlayfs: idmapped layers are currently not supported
	[ +23.174928] overlayfs: idmapped layers are currently not supported
	[Nov22 00:41] overlayfs: idmapped layers are currently not supported
	[Nov22 00:42] overlayfs: idmapped layers are currently not supported
	[Nov22 00:44] overlayfs: idmapped layers are currently not supported
	[Nov22 00:45] overlayfs: idmapped layers are currently not supported
	[Nov22 00:46] overlayfs: idmapped layers are currently not supported
	[Nov22 00:48] overlayfs: idmapped layers are currently not supported
	[Nov22 00:50] overlayfs: idmapped layers are currently not supported
	[Nov22 00:51] overlayfs: idmapped layers are currently not supported
	[ +11.900293] overlayfs: idmapped layers are currently not supported
	[ +28.922055] overlayfs: idmapped layers are currently not supported
	[Nov22 00:52] overlayfs: idmapped layers are currently not supported
	[Nov22 00:53] overlayfs: idmapped layers are currently not supported
	[Nov22 00:54] overlayfs: idmapped layers are currently not supported
	[Nov22 00:55] overlayfs: idmapped layers are currently not supported
	[Nov22 00:56] overlayfs: idmapped layers are currently not supported
	[Nov22 00:57] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [1842c88afa2f92b8f5db1d2172ed30aa44f94bd9a078bc60055ef3ff3665300f] <==
	{"level":"warn","ts":"2025-11-22T00:56:36.848374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:56:36.864652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:56:36.886097Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:56:36.912368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:56:36.927317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:56:36.944974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:56:36.963100Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:56:36.974385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:56:37.003121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:56:37.038872Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:56:37.063854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:56:37.079512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:56:37.095869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:56:37.107150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:56:37.130037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:56:37.182385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:56:37.209981Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:56:37.233016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:56:37.255126Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:56:37.265775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:56:37.314423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:56:37.354433Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:56:37.374977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:56:37.419109Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:56:37.450895Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33700","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 00:57:31 up  5:39,  0 user,  load average: 4.20, 4.00, 2.95
	Linux no-preload-165130 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1b8b4ae3716d0f09f3a149118c900081488b110516bacbfdff2a7628edfb0a3c] <==
	I1122 00:56:39.558156       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1122 00:56:39.565680       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1122 00:56:39.566334       1 main.go:148] setting mtu 1500 for CNI 
	I1122 00:56:39.566381       1 main.go:178] kindnetd IP family: "ipv4"
	I1122 00:56:39.566399       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-22T00:56:39Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1122 00:56:39.737099       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1122 00:56:39.737173       1 controller.go:381] "Waiting for informer caches to sync"
	I1122 00:56:39.737205       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1122 00:56:39.738080       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1122 00:57:09.737468       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1122 00:57:09.737603       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1122 00:57:09.738882       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1122 00:57:09.738995       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1122 00:57:11.037906       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1122 00:57:11.037943       1 metrics.go:72] Registering metrics
	I1122 00:57:11.037996       1 controller.go:711] "Syncing nftables rules"
	I1122 00:57:19.739010       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:57:19.739125       1 main.go:301] handling current node
	I1122 00:57:29.743632       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:57:29.743664       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4a703cddbc0fe37c59864ce18d11f40681bb2c9564af9cff7e041d5680b0df58] <==
	I1122 00:56:38.602657       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1122 00:56:38.602704       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1122 00:56:38.618920       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:56:38.618998       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1122 00:56:38.619015       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1122 00:56:38.619026       1 policy_source.go:240] refreshing policies
	I1122 00:56:38.627280       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1122 00:56:38.651052       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1122 00:56:38.664554       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1122 00:56:38.681902       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1122 00:56:38.687209       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1122 00:56:38.687299       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1122 00:56:38.687307       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1122 00:56:38.687649       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1122 00:56:38.988109       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1122 00:56:39.146307       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1122 00:56:39.468663       1 controller.go:667] quota admission added evaluator for: namespaces
	I1122 00:56:39.568932       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1122 00:56:39.655335       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1122 00:56:39.680990       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1122 00:56:39.806523       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.165.203"}
	I1122 00:56:39.835393       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.173.182"}
	I1122 00:56:42.144851       1 controller.go:667] quota admission added evaluator for: endpoints
	I1122 00:56:42.185645       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1122 00:56:42.393898       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [447fcc475a7323f248b3a0cdb76a205b4bbe6be27f083fa2031b33a0533a533e] <==
	I1122 00:56:41.787371       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-165130"
	I1122 00:56:41.787423       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1122 00:56:41.788434       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1122 00:56:41.788730       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1122 00:56:41.789079       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1122 00:56:41.790959       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1122 00:56:41.796414       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1122 00:56:41.796502       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1122 00:56:41.796530       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1122 00:56:41.798469       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1122 00:56:41.798659       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1122 00:56:41.801065       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1122 00:56:41.811680       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1122 00:56:41.813358       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1122 00:56:41.814531       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:56:41.819274       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1122 00:56:41.820174       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1122 00:56:41.821333       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:56:41.821349       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1122 00:56:41.821357       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1122 00:56:41.831351       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1122 00:56:41.831407       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1122 00:56:41.831429       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1122 00:56:41.831434       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1122 00:56:41.831439       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	
	
	==> kube-proxy [7f4fb030dec090c0c97a17566010db1ca14f9d615fa6361747c4bee8a0793d79] <==
	I1122 00:56:39.799443       1 server_linux.go:53] "Using iptables proxy"
	I1122 00:56:40.034449       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1122 00:56:40.150155       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1122 00:56:40.150285       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1122 00:56:40.150401       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1122 00:56:40.190136       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1122 00:56:40.190206       1 server_linux.go:132] "Using iptables Proxier"
	I1122 00:56:40.195433       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1122 00:56:40.195812       1 server.go:527] "Version info" version="v1.34.1"
	I1122 00:56:40.195836       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:56:40.199582       1 config.go:200] "Starting service config controller"
	I1122 00:56:40.199606       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1122 00:56:40.199855       1 config.go:106] "Starting endpoint slice config controller"
	I1122 00:56:40.199871       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1122 00:56:40.200027       1 config.go:403] "Starting serviceCIDR config controller"
	I1122 00:56:40.200047       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1122 00:56:40.203147       1 config.go:309] "Starting node config controller"
	I1122 00:56:40.203171       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1122 00:56:40.203180       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1122 00:56:40.300697       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1122 00:56:40.300701       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1122 00:56:40.300718       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [d445939f66bc78fcb625769cedbe045c3808629a73405f7634fc8403e2147225] <==
	I1122 00:56:36.847814       1 serving.go:386] Generated self-signed cert in-memory
	W1122 00:56:38.348529       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1122 00:56:38.348556       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1122 00:56:38.348566       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1122 00:56:38.348574       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1122 00:56:38.583633       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1122 00:56:38.583659       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:56:38.609097       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:56:38.609138       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:56:38.627054       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1122 00:56:38.627553       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1122 00:56:38.709908       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 22 00:56:39 no-preload-165130 kubelet[776]: W1122 00:56:39.417743     776 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/1c65dce5fc4b31daf0735f57309cfc4584efb68447875212bc03d5cc1861ca03/crio-d7eb53eae797e9eee32adc178c6a77341dba0bb7ab302a4cc127af543491f907 WatchSource:0}: Error finding container d7eb53eae797e9eee32adc178c6a77341dba0bb7ab302a4cc127af543491f907: Status 404 returned error can't find the container with id d7eb53eae797e9eee32adc178c6a77341dba0bb7ab302a4cc127af543491f907
	Nov 22 00:56:42 no-preload-165130 kubelet[776]: I1122 00:56:42.399872     776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/fc63b412-889b-418a-a30a-c1de29e57030-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-xsqns\" (UID: \"fc63b412-889b-418a-a30a-c1de29e57030\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xsqns"
	Nov 22 00:56:42 no-preload-165130 kubelet[776]: I1122 00:56:42.399929     776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch4sg\" (UniqueName: \"kubernetes.io/projected/fc63b412-889b-418a-a30a-c1de29e57030-kube-api-access-ch4sg\") pod \"kubernetes-dashboard-855c9754f9-xsqns\" (UID: \"fc63b412-889b-418a-a30a-c1de29e57030\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xsqns"
	Nov 22 00:56:42 no-preload-165130 kubelet[776]: I1122 00:56:42.500481     776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/59b54b7f-dca2-49aa-978b-e4e4de474d1a-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-zvt5d\" (UID: \"59b54b7f-dca2-49aa-978b-e4e4de474d1a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zvt5d"
	Nov 22 00:56:42 no-preload-165130 kubelet[776]: I1122 00:56:42.500542     776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rg4c\" (UniqueName: \"kubernetes.io/projected/59b54b7f-dca2-49aa-978b-e4e4de474d1a-kube-api-access-7rg4c\") pod \"dashboard-metrics-scraper-6ffb444bf9-zvt5d\" (UID: \"59b54b7f-dca2-49aa-978b-e4e4de474d1a\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zvt5d"
	Nov 22 00:56:42 no-preload-165130 kubelet[776]: W1122 00:56:42.701478     776 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/1c65dce5fc4b31daf0735f57309cfc4584efb68447875212bc03d5cc1861ca03/crio-30f680460512915134f36b7b5310bf8d51951c44492e62e9dad1ef964684e8ae WatchSource:0}: Error finding container 30f680460512915134f36b7b5310bf8d51951c44492e62e9dad1ef964684e8ae: Status 404 returned error can't find the container with id 30f680460512915134f36b7b5310bf8d51951c44492e62e9dad1ef964684e8ae
	Nov 22 00:56:49 no-preload-165130 kubelet[776]: I1122 00:56:49.150685     776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xsqns" podStartSLOduration=1.727092721 podStartE2EDuration="7.150664865s" podCreationTimestamp="2025-11-22 00:56:42 +0000 UTC" firstStartedPulling="2025-11-22 00:56:42.683307197 +0000 UTC m=+9.906384531" lastFinishedPulling="2025-11-22 00:56:48.106879267 +0000 UTC m=+15.329956675" observedRunningTime="2025-11-22 00:56:49.150226101 +0000 UTC m=+16.373303427" watchObservedRunningTime="2025-11-22 00:56:49.150664865 +0000 UTC m=+16.373742191"
	Nov 22 00:56:54 no-preload-165130 kubelet[776]: I1122 00:56:54.146642     776 scope.go:117] "RemoveContainer" containerID="540420a0ee9ba05e522139f848415db83ff67a2fd96f1890f6827aafb9fb380c"
	Nov 22 00:56:55 no-preload-165130 kubelet[776]: I1122 00:56:55.150853     776 scope.go:117] "RemoveContainer" containerID="540420a0ee9ba05e522139f848415db83ff67a2fd96f1890f6827aafb9fb380c"
	Nov 22 00:56:55 no-preload-165130 kubelet[776]: I1122 00:56:55.151156     776 scope.go:117] "RemoveContainer" containerID="bb223c78c451293e80b95ccadbece2f1a33e69511725ec01523fe5179e972fc5"
	Nov 22 00:56:55 no-preload-165130 kubelet[776]: E1122 00:56:55.151325     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zvt5d_kubernetes-dashboard(59b54b7f-dca2-49aa-978b-e4e4de474d1a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zvt5d" podUID="59b54b7f-dca2-49aa-978b-e4e4de474d1a"
	Nov 22 00:56:56 no-preload-165130 kubelet[776]: I1122 00:56:56.154131     776 scope.go:117] "RemoveContainer" containerID="bb223c78c451293e80b95ccadbece2f1a33e69511725ec01523fe5179e972fc5"
	Nov 22 00:56:56 no-preload-165130 kubelet[776]: E1122 00:56:56.154289     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zvt5d_kubernetes-dashboard(59b54b7f-dca2-49aa-978b-e4e4de474d1a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zvt5d" podUID="59b54b7f-dca2-49aa-978b-e4e4de474d1a"
	Nov 22 00:56:57 no-preload-165130 kubelet[776]: I1122 00:56:57.156906     776 scope.go:117] "RemoveContainer" containerID="bb223c78c451293e80b95ccadbece2f1a33e69511725ec01523fe5179e972fc5"
	Nov 22 00:56:57 no-preload-165130 kubelet[776]: E1122 00:56:57.157089     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zvt5d_kubernetes-dashboard(59b54b7f-dca2-49aa-978b-e4e4de474d1a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zvt5d" podUID="59b54b7f-dca2-49aa-978b-e4e4de474d1a"
	Nov 22 00:57:08 no-preload-165130 kubelet[776]: I1122 00:57:08.981784     776 scope.go:117] "RemoveContainer" containerID="bb223c78c451293e80b95ccadbece2f1a33e69511725ec01523fe5179e972fc5"
	Nov 22 00:57:09 no-preload-165130 kubelet[776]: I1122 00:57:09.189423     776 scope.go:117] "RemoveContainer" containerID="bb223c78c451293e80b95ccadbece2f1a33e69511725ec01523fe5179e972fc5"
	Nov 22 00:57:09 no-preload-165130 kubelet[776]: I1122 00:57:09.190111     776 scope.go:117] "RemoveContainer" containerID="61a964437806ac8e9ff5933c7a44cae8231f1736a03e970fe7de22ff50d297f6"
	Nov 22 00:57:09 no-preload-165130 kubelet[776]: E1122 00:57:09.190835     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zvt5d_kubernetes-dashboard(59b54b7f-dca2-49aa-978b-e4e4de474d1a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zvt5d" podUID="59b54b7f-dca2-49aa-978b-e4e4de474d1a"
	Nov 22 00:57:10 no-preload-165130 kubelet[776]: I1122 00:57:10.195209     776 scope.go:117] "RemoveContainer" containerID="68cc2c66134d4931da62d73545c07672b9868b40d69e94529e89109da91c6ae0"
	Nov 22 00:57:15 no-preload-165130 kubelet[776]: I1122 00:57:15.624148     776 scope.go:117] "RemoveContainer" containerID="61a964437806ac8e9ff5933c7a44cae8231f1736a03e970fe7de22ff50d297f6"
	Nov 22 00:57:15 no-preload-165130 kubelet[776]: E1122 00:57:15.625341     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-zvt5d_kubernetes-dashboard(59b54b7f-dca2-49aa-978b-e4e4de474d1a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zvt5d" podUID="59b54b7f-dca2-49aa-978b-e4e4de474d1a"
	Nov 22 00:57:25 no-preload-165130 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 22 00:57:25 no-preload-165130 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 22 00:57:25 no-preload-165130 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [f3789d40f639b65e128847fc1b9af95f63a6b0dd3621ef61be5f39b47cd48613] <==
	2025/11/22 00:56:48 Using namespace: kubernetes-dashboard
	2025/11/22 00:56:48 Using in-cluster config to connect to apiserver
	2025/11/22 00:56:48 Using secret token for csrf signing
	2025/11/22 00:56:48 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/22 00:56:48 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/22 00:56:48 Successful initial request to the apiserver, version: v1.34.1
	2025/11/22 00:56:48 Generating JWE encryption key
	2025/11/22 00:56:48 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/22 00:56:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/22 00:56:51 Initializing JWE encryption key from synchronized object
	2025/11/22 00:56:51 Creating in-cluster Sidecar client
	2025/11/22 00:56:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/22 00:56:51 Serving insecurely on HTTP port: 9090
	2025/11/22 00:57:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/22 00:56:48 Starting overwatch
	
	
	==> storage-provisioner [68cc2c66134d4931da62d73545c07672b9868b40d69e94529e89109da91c6ae0] <==
	I1122 00:56:39.714477       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1122 00:57:09.719573       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [8e14a16d0cdf92563d2c084ac281b0188eb1b47be445498e8be3c53d189c3b19] <==
	I1122 00:57:10.295430       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1122 00:57:10.320147       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1122 00:57:10.320269       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1122 00:57:10.322908       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:57:13.787364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:57:18.050051       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:57:21.649011       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:57:24.703098       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:57:27.724799       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:57:27.766152       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1122 00:57:27.766307       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1122 00:57:27.771236       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-165130_2bc0a3bc-8de2-490b-b7e4-2857944b60fb!
	I1122 00:57:27.770532       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fa43c339-2ef6-4277-ae88-e611a28aa232", APIVersion:"v1", ResourceVersion:"673", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-165130_2bc0a3bc-8de2-490b-b7e4-2857944b60fb became leader
	W1122 00:57:27.775327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:57:27.787195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1122 00:57:27.888050       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-165130_2bc0a3bc-8de2-490b-b7e4-2857944b60fb!
	W1122 00:57:29.792015       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:57:29.800607       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:57:31.804524       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:57:31.820034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-165130 -n no-preload-165130
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-165130 -n no-preload-165130: exit status 2 (473.70106ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-165130 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (8.29s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (6.93s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-879000 --alsologtostderr -v=1
E1122 00:58:10.906935  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/old-k8s-version-625837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-879000 --alsologtostderr -v=1: exit status 80 (2.108909284s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-879000 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1122 00:58:10.160784  716648 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:58:10.160903  716648 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:58:10.160913  716648 out.go:374] Setting ErrFile to fd 2...
	I1122 00:58:10.160919  716648 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:58:10.161183  716648 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1122 00:58:10.161443  716648 out.go:368] Setting JSON to false
	I1122 00:58:10.161471  716648 mustload.go:66] Loading cluster: embed-certs-879000
	I1122 00:58:10.161935  716648 config.go:182] Loaded profile config "embed-certs-879000": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:58:10.162396  716648 cli_runner.go:164] Run: docker container inspect embed-certs-879000 --format={{.State.Status}}
	I1122 00:58:10.181039  716648 host.go:66] Checking if "embed-certs-879000" exists ...
	I1122 00:58:10.181374  716648 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:58:10.244132  716648 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-22 00:58:10.233301794 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:58:10.244835  716648 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-879000 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1122 00:58:10.248103  716648 out.go:179] * Pausing node embed-certs-879000 ... 
	I1122 00:58:10.250807  716648 host.go:66] Checking if "embed-certs-879000" exists ...
	I1122 00:58:10.251142  716648 ssh_runner.go:195] Run: systemctl --version
	I1122 00:58:10.251201  716648 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-879000
	I1122 00:58:10.270589  716648 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33797 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/embed-certs-879000/id_rsa Username:docker}
	I1122 00:58:10.384006  716648 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:58:10.402673  716648 pause.go:52] kubelet running: true
	I1122 00:58:10.402751  716648 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1122 00:58:10.754860  716648 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1122 00:58:10.754958  716648 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1122 00:58:10.852339  716648 cri.go:89] found id: "933cca2968517797342b88d5e9db0d039293c75efb66faae70c1f0e8a213eaaa"
	I1122 00:58:10.852372  716648 cri.go:89] found id: "08b992da614bc2e772d094ded50c154806974fe8fb54eb1da406e962496e84d6"
	I1122 00:58:10.852382  716648 cri.go:89] found id: "f859b19e5db26f0386393c8d593ce69ad672e813e636cf787d88dc587b72d3be"
	I1122 00:58:10.852386  716648 cri.go:89] found id: "6904d457c7dc728f7026d679dcbeeb784ce896011c4ea8efb2ad461a00099705"
	I1122 00:58:10.852389  716648 cri.go:89] found id: "9fd53bbae898cfbeaff1f5002c8476e6762d7ebef17deeac9f498600ae2a7b1b"
	I1122 00:58:10.852393  716648 cri.go:89] found id: "9852dc8e953e4082535d9da09fe8c7c488642b2923223eea6484d9282094e3ea"
	I1122 00:58:10.852396  716648 cri.go:89] found id: "f660ef303bd4694fbdad76a7eb87133a3cca27093a6685ac673521dce9c9d434"
	I1122 00:58:10.852431  716648 cri.go:89] found id: "d2057db699cba9b5fd5582afa88f5011c61af54cfbf9b6be282bae14ccb3e06b"
	I1122 00:58:10.852441  716648 cri.go:89] found id: "53b12e2f48badfbf5f25cd651f43e00f0d1451191aa045dab44e2461293c766c"
	I1122 00:58:10.852447  716648 cri.go:89] found id: "93d281eec1d97f6f14ff89771acbf42bcdfcc26819b8af93e1dd9a6f16af4fd6"
	I1122 00:58:10.852451  716648 cri.go:89] found id: "94175cffd15a6a8cfd48697de8133883ffd4078df2399b4e302dafcd31ccd293"
	I1122 00:58:10.852454  716648 cri.go:89] found id: ""
	I1122 00:58:10.852541  716648 ssh_runner.go:195] Run: sudo runc list -f json
	I1122 00:58:10.864282  716648 retry.go:31] will retry after 289.293859ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:58:10Z" level=error msg="open /run/runc: no such file or directory"
	I1122 00:58:11.153786  716648 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:58:11.168352  716648 pause.go:52] kubelet running: false
	I1122 00:58:11.168466  716648 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1122 00:58:11.352135  716648 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1122 00:58:11.352224  716648 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1122 00:58:11.478523  716648 cri.go:89] found id: "933cca2968517797342b88d5e9db0d039293c75efb66faae70c1f0e8a213eaaa"
	I1122 00:58:11.478586  716648 cri.go:89] found id: "08b992da614bc2e772d094ded50c154806974fe8fb54eb1da406e962496e84d6"
	I1122 00:58:11.478614  716648 cri.go:89] found id: "f859b19e5db26f0386393c8d593ce69ad672e813e636cf787d88dc587b72d3be"
	I1122 00:58:11.478633  716648 cri.go:89] found id: "6904d457c7dc728f7026d679dcbeeb784ce896011c4ea8efb2ad461a00099705"
	I1122 00:58:11.478665  716648 cri.go:89] found id: "9fd53bbae898cfbeaff1f5002c8476e6762d7ebef17deeac9f498600ae2a7b1b"
	I1122 00:58:11.478687  716648 cri.go:89] found id: "9852dc8e953e4082535d9da09fe8c7c488642b2923223eea6484d9282094e3ea"
	I1122 00:58:11.478704  716648 cri.go:89] found id: "f660ef303bd4694fbdad76a7eb87133a3cca27093a6685ac673521dce9c9d434"
	I1122 00:58:11.478722  716648 cri.go:89] found id: "d2057db699cba9b5fd5582afa88f5011c61af54cfbf9b6be282bae14ccb3e06b"
	I1122 00:58:11.478753  716648 cri.go:89] found id: "53b12e2f48badfbf5f25cd651f43e00f0d1451191aa045dab44e2461293c766c"
	I1122 00:58:11.478776  716648 cri.go:89] found id: "93d281eec1d97f6f14ff89771acbf42bcdfcc26819b8af93e1dd9a6f16af4fd6"
	I1122 00:58:11.478794  716648 cri.go:89] found id: "94175cffd15a6a8cfd48697de8133883ffd4078df2399b4e302dafcd31ccd293"
	I1122 00:58:11.478812  716648 cri.go:89] found id: ""
	I1122 00:58:11.478888  716648 ssh_runner.go:195] Run: sudo runc list -f json
	I1122 00:58:11.498138  716648 retry.go:31] will retry after 192.350818ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:58:11Z" level=error msg="open /run/runc: no such file or directory"
	I1122 00:58:11.691598  716648 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:58:11.708486  716648 pause.go:52] kubelet running: false
	I1122 00:58:11.708559  716648 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1122 00:58:12.019997  716648 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1122 00:58:12.020078  716648 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1122 00:58:12.177964  716648 cri.go:89] found id: "933cca2968517797342b88d5e9db0d039293c75efb66faae70c1f0e8a213eaaa"
	I1122 00:58:12.178002  716648 cri.go:89] found id: "08b992da614bc2e772d094ded50c154806974fe8fb54eb1da406e962496e84d6"
	I1122 00:58:12.178009  716648 cri.go:89] found id: "f859b19e5db26f0386393c8d593ce69ad672e813e636cf787d88dc587b72d3be"
	I1122 00:58:12.178013  716648 cri.go:89] found id: "6904d457c7dc728f7026d679dcbeeb784ce896011c4ea8efb2ad461a00099705"
	I1122 00:58:12.178016  716648 cri.go:89] found id: "9fd53bbae898cfbeaff1f5002c8476e6762d7ebef17deeac9f498600ae2a7b1b"
	I1122 00:58:12.178019  716648 cri.go:89] found id: "9852dc8e953e4082535d9da09fe8c7c488642b2923223eea6484d9282094e3ea"
	I1122 00:58:12.178023  716648 cri.go:89] found id: "f660ef303bd4694fbdad76a7eb87133a3cca27093a6685ac673521dce9c9d434"
	I1122 00:58:12.178036  716648 cri.go:89] found id: "d2057db699cba9b5fd5582afa88f5011c61af54cfbf9b6be282bae14ccb3e06b"
	I1122 00:58:12.178039  716648 cri.go:89] found id: "53b12e2f48badfbf5f25cd651f43e00f0d1451191aa045dab44e2461293c766c"
	I1122 00:58:12.178046  716648 cri.go:89] found id: "93d281eec1d97f6f14ff89771acbf42bcdfcc26819b8af93e1dd9a6f16af4fd6"
	I1122 00:58:12.178049  716648 cri.go:89] found id: "94175cffd15a6a8cfd48697de8133883ffd4078df2399b4e302dafcd31ccd293"
	I1122 00:58:12.178052  716648 cri.go:89] found id: ""
	I1122 00:58:12.178099  716648 ssh_runner.go:195] Run: sudo runc list -f json
	I1122 00:58:12.195930  716648 out.go:203] 
	W1122 00:58:12.198853  716648 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:58:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1122 00:58:12.198871  716648 out.go:285] * 
	W1122 00:58:12.208799  716648 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1122 00:58:12.211843  716648 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-879000 --alsologtostderr -v=1 failed: exit status 80
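The exit status 80 traces back to the container-listing step in the stderr above: "sudo runc list -f json" keeps failing with "open /run/runc: no such file or directory", minikube retries a few times, and the pause then aborts with GUEST_PAUSE. A minimal manual check along the same lines, assuming the embed-certs-879000 node is still running (these ssh invocations are illustrative, not part of the test run):

	# re-run the listing step that pause relies on; on this node it fails because /run/runc is absent
	out/minikube-linux-arm64 -p embed-certs-879000 ssh -- sudo runc list -f json
	# confirm whether the runc state directory exists at all
	out/minikube-linux-arm64 -p embed-certs-879000 ssh -- ls -ld /run/runc
	# the same containers are still visible through the CRI, which is how the harness enumerates them
	out/minikube-linux-arm64 -p embed-certs-879000 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system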
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-879000
helpers_test.go:243: (dbg) docker inspect embed-certs-879000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a6fb6b81dce5b3616a7bbeb7273662d346745bb8e0cc70a843d43e0a09366fd0",
	        "Created": "2025-11-22T00:55:18.964561473Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 710970,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-22T00:57:03.672522224Z",
	            "FinishedAt": "2025-11-22T00:57:02.577511562Z"
	        },
	        "Image": "sha256:97061622515a85470dad13cb046ad5fc5021fcd2b05aa921a620abb52951cd5d",
	        "ResolvConfPath": "/var/lib/docker/containers/a6fb6b81dce5b3616a7bbeb7273662d346745bb8e0cc70a843d43e0a09366fd0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a6fb6b81dce5b3616a7bbeb7273662d346745bb8e0cc70a843d43e0a09366fd0/hostname",
	        "HostsPath": "/var/lib/docker/containers/a6fb6b81dce5b3616a7bbeb7273662d346745bb8e0cc70a843d43e0a09366fd0/hosts",
	        "LogPath": "/var/lib/docker/containers/a6fb6b81dce5b3616a7bbeb7273662d346745bb8e0cc70a843d43e0a09366fd0/a6fb6b81dce5b3616a7bbeb7273662d346745bb8e0cc70a843d43e0a09366fd0-json.log",
	        "Name": "/embed-certs-879000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-879000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-879000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a6fb6b81dce5b3616a7bbeb7273662d346745bb8e0cc70a843d43e0a09366fd0",
	                "LowerDir": "/var/lib/docker/overlay2/b7e6923f56a551fc28b0dd2aeb630a3573a17c8126bc88462d7dcfbefd35cac0-init/diff:/var/lib/docker/overlay2/7e8788c6de692bc1c3758a2bb2c4b8da0fbba26855f855c0f3b655bfbdd92f8e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b7e6923f56a551fc28b0dd2aeb630a3573a17c8126bc88462d7dcfbefd35cac0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b7e6923f56a551fc28b0dd2aeb630a3573a17c8126bc88462d7dcfbefd35cac0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b7e6923f56a551fc28b0dd2aeb630a3573a17c8126bc88462d7dcfbefd35cac0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-879000",
	                "Source": "/var/lib/docker/volumes/embed-certs-879000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-879000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-879000",
	                "name.minikube.sigs.k8s.io": "embed-certs-879000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4a9a6ad23ee32d6c8fa5a452dcd61563d2b58f67e2e1ab8e855c0e878d1731a9",
	            "SandboxKey": "/var/run/docker/netns/4a9a6ad23ee3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33797"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33798"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33801"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33799"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33800"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-879000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3e:6c:0e:f5:18:fa",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9a53cf267b81b1ff031dda8888cce06c9d46b1b11b960898e399a8e14526904f",
	                    "EndpointID": "c19e7971d917a067e288a90add6dfe0ed556affc0e6ed95eaf7df95bd74a471a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-879000",
	                        "a6fb6b81dce5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
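The inspect output shows the Docker container itself is healthy: State.Status is "running", State.Paused is false, and RestartCount is 0, so the failure is confined to the runtime-level pause inside the guest rather than the container. To pull out just those fields without the full JSON (an illustrative command, not something the harness runs):

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}} restarts={{.RestartCount}}' embed-certs-879000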
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-879000 -n embed-certs-879000
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-879000 -n embed-certs-879000: exit status 2 (487.677826ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-879000 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-879000 logs -n 25: (1.511772917s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p old-k8s-version-625837 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-625837       │ jenkins │ v1.37.0 │ 22 Nov 25 00:53 UTC │ 22 Nov 25 00:54 UTC │
	│ image   │ old-k8s-version-625837 image list --format=json                                                                                                                                                                                               │ old-k8s-version-625837       │ jenkins │ v1.37.0 │ 22 Nov 25 00:54 UTC │ 22 Nov 25 00:54 UTC │
	│ pause   │ -p old-k8s-version-625837 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-625837       │ jenkins │ v1.37.0 │ 22 Nov 25 00:54 UTC │                     │
	│ start   │ -p cert-expiration-621390 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-621390       │ jenkins │ v1.37.0 │ 22 Nov 25 00:54 UTC │ 22 Nov 25 00:55 UTC │
	│ delete  │ -p old-k8s-version-625837                                                                                                                                                                                                                     │ old-k8s-version-625837       │ jenkins │ v1.37.0 │ 22 Nov 25 00:54 UTC │ 22 Nov 25 00:54 UTC │
	│ delete  │ -p old-k8s-version-625837                                                                                                                                                                                                                     │ old-k8s-version-625837       │ jenkins │ v1.37.0 │ 22 Nov 25 00:54 UTC │ 22 Nov 25 00:54 UTC │
	│ start   │ -p no-preload-165130 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-165130            │ jenkins │ v1.37.0 │ 22 Nov 25 00:54 UTC │ 22 Nov 25 00:56 UTC │
	│ delete  │ -p cert-expiration-621390                                                                                                                                                                                                                     │ cert-expiration-621390       │ jenkins │ v1.37.0 │ 22 Nov 25 00:55 UTC │ 22 Nov 25 00:55 UTC │
	│ start   │ -p embed-certs-879000 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-879000           │ jenkins │ v1.37.0 │ 22 Nov 25 00:55 UTC │ 22 Nov 25 00:56 UTC │
	│ addons  │ enable metrics-server -p no-preload-165130 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-165130            │ jenkins │ v1.37.0 │ 22 Nov 25 00:56 UTC │                     │
	│ stop    │ -p no-preload-165130 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-165130            │ jenkins │ v1.37.0 │ 22 Nov 25 00:56 UTC │ 22 Nov 25 00:56 UTC │
	│ addons  │ enable dashboard -p no-preload-165130 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-165130            │ jenkins │ v1.37.0 │ 22 Nov 25 00:56 UTC │ 22 Nov 25 00:56 UTC │
	│ start   │ -p no-preload-165130 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-165130            │ jenkins │ v1.37.0 │ 22 Nov 25 00:56 UTC │ 22 Nov 25 00:57 UTC │
	│ addons  │ enable metrics-server -p embed-certs-879000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-879000           │ jenkins │ v1.37.0 │ 22 Nov 25 00:56 UTC │                     │
	│ stop    │ -p embed-certs-879000 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-879000           │ jenkins │ v1.37.0 │ 22 Nov 25 00:56 UTC │ 22 Nov 25 00:57 UTC │
	│ addons  │ enable dashboard -p embed-certs-879000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-879000           │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │ 22 Nov 25 00:57 UTC │
	│ start   │ -p embed-certs-879000 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-879000           │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │ 22 Nov 25 00:57 UTC │
	│ image   │ no-preload-165130 image list --format=json                                                                                                                                                                                                    │ no-preload-165130            │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │ 22 Nov 25 00:57 UTC │
	│ pause   │ -p no-preload-165130 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-165130            │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │                     │
	│ delete  │ -p no-preload-165130                                                                                                                                                                                                                          │ no-preload-165130            │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │ 22 Nov 25 00:57 UTC │
	│ delete  │ -p no-preload-165130                                                                                                                                                                                                                          │ no-preload-165130            │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │ 22 Nov 25 00:57 UTC │
	│ delete  │ -p disable-driver-mounts-046489                                                                                                                                                                                                               │ disable-driver-mounts-046489 │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │ 22 Nov 25 00:57 UTC │
	│ start   │ -p default-k8s-diff-port-882305 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-882305 │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │                     │
	│ image   │ embed-certs-879000 image list --format=json                                                                                                                                                                                                   │ embed-certs-879000           │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │ 22 Nov 25 00:58 UTC │
	│ pause   │ -p embed-certs-879000 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-879000           │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 00:57:36
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1122 00:57:36.338855  714411 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:57:36.338966  714411 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:57:36.338975  714411 out.go:374] Setting ErrFile to fd 2...
	I1122 00:57:36.338980  714411 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:57:36.339261  714411 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1122 00:57:36.339806  714411 out.go:368] Setting JSON to false
	I1122 00:57:36.340878  714411 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":20373,"bootTime":1763752684,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1122 00:57:36.340949  714411 start.go:143] virtualization:  
	I1122 00:57:36.344745  714411 out.go:179] * [default-k8s-diff-port-882305] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1122 00:57:36.348423  714411 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:57:36.348572  714411 notify.go:221] Checking for updates...
	I1122 00:57:36.354235  714411 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:57:36.357038  714411 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1122 00:57:36.359823  714411 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-513600/.minikube
	I1122 00:57:36.362686  714411 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1122 00:57:36.365604  714411 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:57:36.369421  714411 config.go:182] Loaded profile config "embed-certs-879000": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:57:36.369583  714411 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:57:36.401941  714411 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1122 00:57:36.402074  714411 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:57:36.464707  714411 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-22 00:57:36.451169185 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:57:36.464815  714411 docker.go:319] overlay module found
	I1122 00:57:36.467951  714411 out.go:179] * Using the docker driver based on user configuration
	I1122 00:57:36.470826  714411 start.go:309] selected driver: docker
	I1122 00:57:36.470847  714411 start.go:930] validating driver "docker" against <nil>
	I1122 00:57:36.470867  714411 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:57:36.471660  714411 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:57:36.531947  714411 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-22 00:57:36.522770872 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:57:36.532110  714411 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1122 00:57:36.532335  714411 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:57:36.535396  714411 out.go:179] * Using Docker driver with root privileges
	I1122 00:57:36.538520  714411 cni.go:84] Creating CNI manager for ""
	I1122 00:57:36.538596  714411 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:57:36.538610  714411 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1122 00:57:36.538689  714411 start.go:353] cluster config:
	{Name:default-k8s-diff-port-882305 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-882305 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:57:36.542550  714411 out.go:179] * Starting "default-k8s-diff-port-882305" primary control-plane node in "default-k8s-diff-port-882305" cluster
	I1122 00:57:36.545709  714411 cache.go:134] Beginning downloading kic base image for docker with crio
	I1122 00:57:36.548919  714411 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:57:36.552325  714411 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:57:36.552377  714411 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1122 00:57:36.552399  714411 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:57:36.552413  714411 cache.go:65] Caching tarball of preloaded images
	I1122 00:57:36.552499  714411 preload.go:238] Found /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1122 00:57:36.552510  714411 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1122 00:57:36.552615  714411 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/config.json ...
	I1122 00:57:36.552636  714411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/config.json: {Name:mk88d3853903bd6dc43beb8d0931343736bf22be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:57:36.571735  714411 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:57:36.571762  714411 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:57:36.571783  714411 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:57:36.571807  714411 start.go:360] acquireMachinesLock for default-k8s-diff-port-882305: {Name:mk803954bb6347dd99a7e73d8fd5992e1319a31c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:57:36.571913  714411 start.go:364] duration metric: took 86.537µs to acquireMachinesLock for "default-k8s-diff-port-882305"
	I1122 00:57:36.571944  714411 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-882305 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-882305 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:57:36.572025  714411 start.go:125] createHost starting for "" (driver="docker")
	W1122 00:57:34.281967  710840 pod_ready.go:104] pod "coredns-66bc5c9577-h2kpd" is not "Ready", error: <nil>
	W1122 00:57:36.282360  710840 pod_ready.go:104] pod "coredns-66bc5c9577-h2kpd" is not "Ready", error: <nil>
	I1122 00:57:36.576834  714411 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1122 00:57:36.577089  714411 start.go:159] libmachine.API.Create for "default-k8s-diff-port-882305" (driver="docker")
	I1122 00:57:36.577128  714411 client.go:173] LocalClient.Create starting
	I1122 00:57:36.577211  714411 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem
	I1122 00:57:36.577253  714411 main.go:143] libmachine: Decoding PEM data...
	I1122 00:57:36.577272  714411 main.go:143] libmachine: Parsing certificate...
	I1122 00:57:36.577329  714411 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem
	I1122 00:57:36.577350  714411 main.go:143] libmachine: Decoding PEM data...
	I1122 00:57:36.577364  714411 main.go:143] libmachine: Parsing certificate...
	I1122 00:57:36.577754  714411 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-882305 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1122 00:57:36.592847  714411 cli_runner.go:211] docker network inspect default-k8s-diff-port-882305 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1122 00:57:36.592924  714411 network_create.go:284] running [docker network inspect default-k8s-diff-port-882305] to gather additional debugging logs...
	I1122 00:57:36.592943  714411 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-882305
	W1122 00:57:36.608198  714411 cli_runner.go:211] docker network inspect default-k8s-diff-port-882305 returned with exit code 1
	I1122 00:57:36.608244  714411 network_create.go:287] error running [docker network inspect default-k8s-diff-port-882305]: docker network inspect default-k8s-diff-port-882305: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-882305 not found
	I1122 00:57:36.608257  714411 network_create.go:289] output of [docker network inspect default-k8s-diff-port-882305]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-882305 not found
	
	** /stderr **
	I1122 00:57:36.608350  714411 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:57:36.626234  714411 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b16c782e3da8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ba:82:00:9d:45:d0} reservation:<nil>}
	I1122 00:57:36.626554  714411 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-13c9c00b5de5 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:7a:4e:a4:3d:42:9e} reservation:<nil>}
	I1122 00:57:36.626915  714411 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3c074a6aa87b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9a:1f:77:e5:90:0b} reservation:<nil>}
	I1122 00:57:36.627176  714411 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-9a53cf267b81 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:78:c2:70:c7:bf} reservation:<nil>}
	I1122 00:57:36.627607  714411 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a54150}
	I1122 00:57:36.627625  714411 network_create.go:124] attempt to create docker network default-k8s-diff-port-882305 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1122 00:57:36.627679  714411 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-882305 default-k8s-diff-port-882305
	I1122 00:57:36.687123  714411 network_create.go:108] docker network default-k8s-diff-port-882305 192.168.85.0/24 created
	I1122 00:57:36.687155  714411 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-882305" container
	I1122 00:57:36.687250  714411 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1122 00:57:36.703679  714411 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-882305 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-882305 --label created_by.minikube.sigs.k8s.io=true
	I1122 00:57:36.721251  714411 oci.go:103] Successfully created a docker volume default-k8s-diff-port-882305
	I1122 00:57:36.721347  714411 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-882305-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-882305 --entrypoint /usr/bin/test -v default-k8s-diff-port-882305:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -d /var/lib
	I1122 00:57:37.288284  714411 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-882305
	I1122 00:57:37.288358  714411 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:57:37.288373  714411 kic.go:194] Starting extracting preloaded images to volume ...
	I1122 00:57:37.288444  714411 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-882305:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir
	W1122 00:57:38.781359  710840 pod_ready.go:104] pod "coredns-66bc5c9577-h2kpd" is not "Ready", error: <nil>
	W1122 00:57:40.782821  710840 pod_ready.go:104] pod "coredns-66bc5c9577-h2kpd" is not "Ready", error: <nil>
	W1122 00:57:43.281271  710840 pod_ready.go:104] pod "coredns-66bc5c9577-h2kpd" is not "Ready", error: <nil>
	I1122 00:57:41.634887  714411 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-882305:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir: (4.346401253s)
	I1122 00:57:41.634918  714411 kic.go:203] duration metric: took 4.346542253s to extract preloaded images to volume ...
	W1122 00:57:41.635059  714411 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1122 00:57:41.635179  714411 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1122 00:57:41.700557  714411 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-882305 --name default-k8s-diff-port-882305 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-882305 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-882305 --network default-k8s-diff-port-882305 --ip 192.168.85.2 --volume default-k8s-diff-port-882305:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e
	I1122 00:57:42.051604  714411 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-882305 --format={{.State.Running}}
	I1122 00:57:42.075492  714411 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-882305 --format={{.State.Status}}
	I1122 00:57:42.109978  714411 cli_runner.go:164] Run: docker exec default-k8s-diff-port-882305 stat /var/lib/dpkg/alternatives/iptables
	I1122 00:57:42.214166  714411 oci.go:144] the created container "default-k8s-diff-port-882305" has a running status.
	I1122 00:57:42.214214  714411 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/default-k8s-diff-port-882305/id_rsa...
	I1122 00:57:42.534977  714411 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21934-513600/.minikube/machines/default-k8s-diff-port-882305/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1122 00:57:42.557084  714411 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-882305 --format={{.State.Status}}
	I1122 00:57:42.577178  714411 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1122 00:57:42.577197  714411 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-882305 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1122 00:57:42.645260  714411 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-882305 --format={{.State.Status}}
	I1122 00:57:42.665294  714411 machine.go:94] provisionDockerMachine start ...
	I1122 00:57:42.665384  714411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-882305
	I1122 00:57:42.687609  714411 main.go:143] libmachine: Using SSH client type: native
	I1122 00:57:42.688024  714411 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33802 <nil> <nil>}
	I1122 00:57:42.688037  714411 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:57:42.688714  714411 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1122 00:57:45.829296  714411 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-882305
	
	I1122 00:57:45.829321  714411 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-882305"
	I1122 00:57:45.829384  714411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-882305
	I1122 00:57:45.847592  714411 main.go:143] libmachine: Using SSH client type: native
	I1122 00:57:45.847925  714411 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33802 <nil> <nil>}
	I1122 00:57:45.847940  714411 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-882305 && echo "default-k8s-diff-port-882305" | sudo tee /etc/hostname
	I1122 00:57:45.996173  714411 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-882305
	
	I1122 00:57:45.996335  714411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-882305
	I1122 00:57:46.016589  714411 main.go:143] libmachine: Using SSH client type: native
	I1122 00:57:46.016919  714411 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33802 <nil> <nil>}
	I1122 00:57:46.016937  714411 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-882305' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-882305/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-882305' | sudo tee -a /etc/hosts; 
				fi
			fi
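The hostname snippet above follows the usual Debian convention: an existing 127.0.1.1 entry is rewritten to the new hostname, otherwise one is appended. A minimal check of the result on the node (a hypothetical session, not captured in this run) would be:

    # hypothetical verification inside the node, assuming the sed/tee above succeeded
    grep '^127\.0\.1\.1' /etc/hosts     # expect: 127.0.1.1 default-k8s-diff-port-882305
    hostname                            # expect: default-k8s-diff-port-882305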
	I1122 00:57:46.167713  714411 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1122 00:57:46.167739  714411 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-513600/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-513600/.minikube}
	I1122 00:57:46.167758  714411 ubuntu.go:190] setting up certificates
	I1122 00:57:46.167769  714411 provision.go:84] configureAuth start
	I1122 00:57:46.167845  714411 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-882305
	I1122 00:57:46.184944  714411 provision.go:143] copyHostCerts
	I1122 00:57:46.185019  714411 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem, removing ...
	I1122 00:57:46.185033  714411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem
	I1122 00:57:46.185111  714411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem (1675 bytes)
	I1122 00:57:46.185206  714411 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem, removing ...
	I1122 00:57:46.185218  714411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem
	I1122 00:57:46.185247  714411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem (1078 bytes)
	I1122 00:57:46.185306  714411 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem, removing ...
	I1122 00:57:46.185315  714411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem
	I1122 00:57:46.185339  714411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem (1123 bytes)
	I1122 00:57:46.185391  714411 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-882305 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-882305 localhost minikube]
	I1122 00:57:46.300381  714411 provision.go:177] copyRemoteCerts
	I1122 00:57:46.300458  714411 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:57:46.300498  714411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-882305
	I1122 00:57:46.318147  714411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33802 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/default-k8s-diff-port-882305/id_rsa Username:docker}
	W1122 00:57:45.283266  710840 pod_ready.go:104] pod "coredns-66bc5c9577-h2kpd" is not "Ready", error: <nil>
	W1122 00:57:47.786561  710840 pod_ready.go:104] pod "coredns-66bc5c9577-h2kpd" is not "Ready", error: <nil>
	I1122 00:57:46.422892  714411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1122 00:57:46.441326  714411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:57:46.461539  714411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1122 00:57:46.481467  714411 provision.go:87] duration metric: took 313.677132ms to configureAuth
	I1122 00:57:46.481507  714411 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:57:46.481685  714411 config.go:182] Loaded profile config "default-k8s-diff-port-882305": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:57:46.481794  714411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-882305
	I1122 00:57:46.498946  714411 main.go:143] libmachine: Using SSH client type: native
	I1122 00:57:46.499282  714411 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33802 <nil> <nil>}
	I1122 00:57:46.499301  714411 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1122 00:57:46.897323  714411 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1122 00:57:46.897423  714411 machine.go:97] duration metric: took 4.232109748s to provisionDockerMachine
	I1122 00:57:46.897447  714411 client.go:176] duration metric: took 10.320307836s to LocalClient.Create
	I1122 00:57:46.897492  714411 start.go:167] duration metric: took 10.320404334s to libmachine.API.Create "default-k8s-diff-port-882305"
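provisionDockerMachine finishes by writing a sysconfig drop-in over SSH that passes --insecure-registry for the service CIDR (10.96.0.0/12) to CRI-O and then restarts the daemon. A hedged way to confirm the drop-in and the restart (not part of this run):

    cat /etc/sysconfig/crio.minikube
    #   CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    systemctl is-active crio            # should print "active" after the restart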
	I1122 00:57:46.897518  714411 start.go:293] postStartSetup for "default-k8s-diff-port-882305" (driver="docker")
	I1122 00:57:46.897543  714411 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:57:46.897643  714411 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:57:46.897704  714411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-882305
	I1122 00:57:46.917542  714411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33802 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/default-k8s-diff-port-882305/id_rsa Username:docker}
	I1122 00:57:47.018291  714411 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:57:47.021795  714411 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:57:47.021854  714411 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:57:47.021867  714411 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/addons for local assets ...
	I1122 00:57:47.021922  714411 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/files for local assets ...
	I1122 00:57:47.022011  714411 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem -> 5169372.pem in /etc/ssl/certs
	I1122 00:57:47.022123  714411 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:57:47.029651  714411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /etc/ssl/certs/5169372.pem (1708 bytes)
	I1122 00:57:47.050910  714411 start.go:296] duration metric: took 153.363882ms for postStartSetup
	I1122 00:57:47.051304  714411 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-882305
	I1122 00:57:47.069776  714411 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/config.json ...
	I1122 00:57:47.070154  714411 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:57:47.070205  714411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-882305
	I1122 00:57:47.094646  714411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33802 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/default-k8s-diff-port-882305/id_rsa Username:docker}
	I1122 00:57:47.190790  714411 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:57:47.195372  714411 start.go:128] duration metric: took 10.623328673s to createHost
	I1122 00:57:47.195443  714411 start.go:83] releasing machines lock for "default-k8s-diff-port-882305", held for 10.623514989s
	I1122 00:57:47.195568  714411 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-882305
	I1122 00:57:47.213580  714411 ssh_runner.go:195] Run: cat /version.json
	I1122 00:57:47.213654  714411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-882305
	I1122 00:57:47.213976  714411 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:57:47.214041  714411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-882305
	I1122 00:57:47.237694  714411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33802 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/default-k8s-diff-port-882305/id_rsa Username:docker}
	I1122 00:57:47.247384  714411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33802 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/default-k8s-diff-port-882305/id_rsa Username:docker}
	I1122 00:57:47.341556  714411 ssh_runner.go:195] Run: systemctl --version
	I1122 00:57:47.436128  714411 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1122 00:57:47.476930  714411 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:57:47.481939  714411 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:57:47.482011  714411 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:57:47.511413  714411 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1122 00:57:47.511434  714411 start.go:496] detecting cgroup driver to use...
	I1122 00:57:47.511467  714411 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1122 00:57:47.511528  714411 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1122 00:57:47.529997  714411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1122 00:57:47.543807  714411 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:57:47.543898  714411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:57:47.561702  714411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:57:47.582585  714411 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:57:47.724087  714411 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:57:47.866352  714411 docker.go:234] disabling docker service ...
	I1122 00:57:47.866468  714411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:57:47.888916  714411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:57:47.911285  714411 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:57:48.074318  714411 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:57:48.193871  714411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:57:48.206619  714411 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:57:48.220358  714411 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1122 00:57:48.220472  714411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:57:48.230099  714411 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1122 00:57:48.230169  714411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:57:48.241238  714411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:57:48.251030  714411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:57:48.260004  714411 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:57:48.269326  714411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:57:48.282789  714411 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:57:48.298599  714411 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:57:48.307769  714411 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:57:48.316181  714411 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
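Taken together, the sed edits above pin the pause image to registry.k8s.io/pause:3.10.1, switch CRI-O to the cgroupfs cgroup manager with conmon_cgroup = "pod", and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. One hedged way to confirm the resulting keys in /etc/crio/crio.conf.d/02-crio.conf (illustrative, not from this run):

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # expected, roughly:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #     "net.ipv4.ip_unprivileged_port_start=0",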
	I1122 00:57:48.323680  714411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:57:48.449992  714411 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1122 00:57:48.636803  714411 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1122 00:57:48.636906  714411 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1122 00:57:48.641309  714411 start.go:564] Will wait 60s for crictl version
	I1122 00:57:48.641399  714411 ssh_runner.go:195] Run: which crictl
	I1122 00:57:48.645290  714411 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:57:48.680443  714411 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1122 00:57:48.680594  714411 ssh_runner.go:195] Run: crio --version
	I1122 00:57:48.717115  714411 ssh_runner.go:195] Run: crio --version
	I1122 00:57:48.758504  714411 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1122 00:57:48.763666  714411 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-882305 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
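The long --format template above just collects the network name, driver, subnet, gateway, MTU and container IPs into a single JSON-like line. A simpler equivalent for pulling out only the subnet and gateway (assumed, not from this run):

    docker network inspect default-k8s-diff-port-882305 \
      --format '{{.Name}} {{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
    # expect something like: default-k8s-diff-port-882305 192.168.85.0/24 192.168.85.1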
	I1122 00:57:48.789982  714411 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1122 00:57:48.793913  714411 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:57:48.804338  714411 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-882305 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-882305 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1122 00:57:48.804465  714411 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:57:48.804546  714411 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:57:48.841363  714411 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:57:48.841393  714411 crio.go:433] Images already preloaded, skipping extraction
	I1122 00:57:48.841450  714411 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:57:48.867619  714411 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:57:48.867642  714411 cache_images.go:86] Images are preloaded, skipping loading
	I1122 00:57:48.867649  714411 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1122 00:57:48.867739  714411 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-882305 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-882305 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
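The kubelet drop-in above (written further below to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf) first clears ExecStart and then re-points it at the minikube-shipped v1.34.1 kubelet, wired to the node's hostname and IP; an empty ExecStart= is how systemd drop-ins reset the base unit's command before overriding it. A hedged check that systemd picked the override up (not captured here):

    systemctl cat kubelet | grep '^ExecStart='      # the non-empty ExecStart= from the drop-in should win
    sudo systemctl daemon-reload && sudo systemctl start kubelet   # what the log does a few steps later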
	I1122 00:57:48.867823  714411 ssh_runner.go:195] Run: crio config
	I1122 00:57:48.935771  714411 cni.go:84] Creating CNI manager for ""
	I1122 00:57:48.935850  714411 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:57:48.935884  714411 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1122 00:57:48.935935  714411 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-882305 NodeName:default-k8s-diff-port-882305 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 00:57:48.936099  714411 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-882305"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1122 00:57:48.936201  714411 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 00:57:48.944157  714411 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:57:48.944231  714411 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1122 00:57:48.952059  714411 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1122 00:57:48.965184  714411 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:57:48.977916  714411 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
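The kubeadm config shown above is what gets copied here to /var/tmp/minikube/kubeadm.yaml.new (2225 bytes). Before handing such a file to kubeadm init it can be sanity-checked offline; a hedged sketch (kubeadm config validate exists in recent kubeadm releases, but this check is not part of the test run):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new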
	I1122 00:57:48.990672  714411 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1122 00:57:48.994270  714411 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:57:49.005478  714411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:57:49.129181  714411 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:57:49.145991  714411 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305 for IP: 192.168.85.2
	I1122 00:57:49.146052  714411 certs.go:195] generating shared ca certs ...
	I1122 00:57:49.146082  714411 certs.go:227] acquiring lock for ca certs: {Name:mkaf4c79493334cb2058b349c36be7473837f9b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:57:49.146244  714411 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key
	I1122 00:57:49.146313  714411 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key
	I1122 00:57:49.146334  714411 certs.go:257] generating profile certs ...
	I1122 00:57:49.146417  714411 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/client.key
	I1122 00:57:49.146449  714411 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/client.crt with IP's: []
	I1122 00:57:49.514846  714411 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/client.crt ...
	I1122 00:57:49.514881  714411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/client.crt: {Name:mk7377ae04629c0ac33a6f4a312bb4bb2ed63cf6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:57:49.515081  714411 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/client.key ...
	I1122 00:57:49.515096  714411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/client.key: {Name:mk3d952a0612866c8f7a6ae6f6bb8b13b69f3dc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:57:49.515190  714411 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/apiserver.key.14c699f7
	I1122 00:57:49.515207  714411 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/apiserver.crt.14c699f7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1122 00:57:49.652313  714411 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/apiserver.crt.14c699f7 ...
	I1122 00:57:49.652346  714411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/apiserver.crt.14c699f7: {Name:mk3fb1b2409298ad13f2a2bbc731fb22e92e1fd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:57:49.652518  714411 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/apiserver.key.14c699f7 ...
	I1122 00:57:49.652533  714411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/apiserver.key.14c699f7: {Name:mk3c157e6ae7da90405863e114d5f562094f32d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:57:49.652620  714411 certs.go:382] copying /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/apiserver.crt.14c699f7 -> /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/apiserver.crt
	I1122 00:57:49.652702  714411 certs.go:386] copying /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/apiserver.key.14c699f7 -> /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/apiserver.key
	I1122 00:57:49.652762  714411 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/proxy-client.key
	I1122 00:57:49.652781  714411 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/proxy-client.crt with IP's: []
	I1122 00:57:49.813893  714411 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/proxy-client.crt ...
	I1122 00:57:49.813923  714411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/proxy-client.crt: {Name:mk113ad53d15edaae52dea3559e2e61985f0d225 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:57:49.814093  714411 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/proxy-client.key ...
	I1122 00:57:49.814113  714411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/proxy-client.key: {Name:mk36d48c76d061c1f8fdbb9f64c8bcc3b3b20c16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:57:49.814299  714411 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem (1338 bytes)
	W1122 00:57:49.814345  714411 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937_empty.pem, impossibly tiny 0 bytes
	I1122 00:57:49.814358  714411 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:57:49.814387  714411 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:57:49.814416  714411 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:57:49.814445  714411 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem (1675 bytes)
	I1122 00:57:49.814496  714411 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem (1708 bytes)
	I1122 00:57:49.815036  714411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:57:49.835252  714411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1122 00:57:49.855020  714411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:57:49.873296  714411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:57:49.891626  714411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1122 00:57:49.909596  714411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1122 00:57:49.927463  714411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:57:49.945589  714411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1122 00:57:49.962712  714411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem --> /usr/share/ca-certificates/516937.pem (1338 bytes)
	I1122 00:57:49.979872  714411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /usr/share/ca-certificates/5169372.pem (1708 bytes)
	I1122 00:57:49.998214  714411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:57:50.028353  714411 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 00:57:50.053434  714411 ssh_runner.go:195] Run: openssl version
	I1122 00:57:50.060745  714411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516937.pem && ln -fs /usr/share/ca-certificates/516937.pem /etc/ssl/certs/516937.pem"
	I1122 00:57:50.071075  714411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516937.pem
	I1122 00:57:50.075861  714411 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:55 /usr/share/ca-certificates/516937.pem
	I1122 00:57:50.075973  714411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516937.pem
	I1122 00:57:50.125693  714411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516937.pem /etc/ssl/certs/51391683.0"
	I1122 00:57:50.134481  714411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5169372.pem && ln -fs /usr/share/ca-certificates/5169372.pem /etc/ssl/certs/5169372.pem"
	I1122 00:57:50.143325  714411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5169372.pem
	I1122 00:57:50.147490  714411 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:55 /usr/share/ca-certificates/5169372.pem
	I1122 00:57:50.147582  714411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5169372.pem
	I1122 00:57:50.189297  714411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5169372.pem /etc/ssl/certs/3ec20f2e.0"
	I1122 00:57:50.198053  714411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:57:50.206803  714411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:57:50.210840  714411 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:57:50.210906  714411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:57:50.252654  714411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
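The hash-named symlinks above (51391683.0, 3ec20f2e.0, b5213941.0) are how OpenSSL locates CA certificates in /etc/ssl/certs: the file name is the subject-name hash printed by openssl x509 -hash. Reproducing one link by hand would look roughly like this (illustrative, not from this run):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # per the log, h is b5213941 here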
	I1122 00:57:50.261552  714411 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:57:50.265268  714411 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1122 00:57:50.265389  714411 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-882305 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-882305 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:57:50.265474  714411 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1122 00:57:50.265546  714411 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:57:50.298072  714411 cri.go:89] found id: ""
	I1122 00:57:50.298160  714411 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 00:57:50.306224  714411 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1122 00:57:50.313894  714411 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1122 00:57:50.313958  714411 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1122 00:57:50.321662  714411 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1122 00:57:50.321682  714411 kubeadm.go:158] found existing configuration files:
	
	I1122 00:57:50.321735  714411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1122 00:57:50.330168  714411 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1122 00:57:50.330246  714411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1122 00:57:50.337643  714411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1122 00:57:50.345329  714411 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1122 00:57:50.345397  714411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1122 00:57:50.352720  714411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1122 00:57:50.361194  714411 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1122 00:57:50.361310  714411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1122 00:57:50.374209  714411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1122 00:57:50.382929  714411 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1122 00:57:50.383002  714411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1122 00:57:50.390480  714411 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
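The init command above prepends /var/lib/minikube/binaries/v1.34.1 to PATH so the pinned kubeadm is used, and ignores the preflight checks that do not hold inside a docker-driver node (swap, CPU count, memory, bridge-nf-call-iptables, SystemVerification). Stripped to its core, the pattern is (a sketch, not a re-run of the test):

    sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=Swap,NumCPU,Mem,SystemVerification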
	I1122 00:57:50.436757  714411 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1122 00:57:50.436819  714411 kubeadm.go:319] [preflight] Running pre-flight checks
	I1122 00:57:50.461701  714411 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1122 00:57:50.461776  714411 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1122 00:57:50.461835  714411 kubeadm.go:319] OS: Linux
	I1122 00:57:50.461885  714411 kubeadm.go:319] CGROUPS_CPU: enabled
	I1122 00:57:50.461939  714411 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1122 00:57:50.461991  714411 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1122 00:57:50.462042  714411 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1122 00:57:50.462094  714411 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1122 00:57:50.462145  714411 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1122 00:57:50.462198  714411 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1122 00:57:50.462249  714411 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1122 00:57:50.462298  714411 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1122 00:57:50.535136  714411 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1122 00:57:50.535342  714411 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1122 00:57:50.535483  714411 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1122 00:57:50.542789  714411 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1122 00:57:50.548106  714411 out.go:252]   - Generating certificates and keys ...
	I1122 00:57:50.548206  714411 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1122 00:57:50.548276  714411 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1122 00:57:50.768906  714411 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1122 00:57:50.975715  714411 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	W1122 00:57:50.281535  710840 pod_ready.go:104] pod "coredns-66bc5c9577-h2kpd" is not "Ready", error: <nil>
	W1122 00:57:52.286081  710840 pod_ready.go:104] pod "coredns-66bc5c9577-h2kpd" is not "Ready", error: <nil>
	I1122 00:57:52.101141  714411 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1122 00:57:52.461866  714411 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1122 00:57:52.728889  714411 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1122 00:57:52.729272  714411 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-882305 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1122 00:57:53.536729  714411 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1122 00:57:53.537077  714411 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-882305 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1122 00:57:55.063363  714411 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1122 00:57:55.445852  714411 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1122 00:57:55.724965  714411 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1122 00:57:55.725216  714411 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1122 00:57:55.971678  714411 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1122 00:57:56.176982  714411 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1122 00:57:56.718212  714411 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1122 00:57:57.227941  714411 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1122 00:57:57.762121  714411 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1122 00:57:57.762843  714411 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1122 00:57:57.765520  714411 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1122 00:57:54.782214  710840 pod_ready.go:104] pod "coredns-66bc5c9577-h2kpd" is not "Ready", error: <nil>
	I1122 00:57:56.784835  710840 pod_ready.go:94] pod "coredns-66bc5c9577-h2kpd" is "Ready"
	I1122 00:57:56.784864  710840 pod_ready.go:86] duration metric: took 38.009231841s for pod "coredns-66bc5c9577-h2kpd" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:57:56.789483  710840 pod_ready.go:83] waiting for pod "etcd-embed-certs-879000" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:57:56.798365  710840 pod_ready.go:94] pod "etcd-embed-certs-879000" is "Ready"
	I1122 00:57:56.798393  710840 pod_ready.go:86] duration metric: took 8.882194ms for pod "etcd-embed-certs-879000" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:57:56.801545  710840 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-879000" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:57:56.809327  710840 pod_ready.go:94] pod "kube-apiserver-embed-certs-879000" is "Ready"
	I1122 00:57:56.809360  710840 pod_ready.go:86] duration metric: took 7.785856ms for pod "kube-apiserver-embed-certs-879000" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:57:56.811870  710840 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-879000" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:57:56.979526  710840 pod_ready.go:94] pod "kube-controller-manager-embed-certs-879000" is "Ready"
	I1122 00:57:56.979556  710840 pod_ready.go:86] duration metric: took 167.647895ms for pod "kube-controller-manager-embed-certs-879000" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:57:57.180650  710840 pod_ready.go:83] waiting for pod "kube-proxy-w9bqj" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:57:57.580685  710840 pod_ready.go:94] pod "kube-proxy-w9bqj" is "Ready"
	I1122 00:57:57.580726  710840 pod_ready.go:86] duration metric: took 400.046535ms for pod "kube-proxy-w9bqj" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:57:57.780308  710840 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-879000" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:57:58.178694  710840 pod_ready.go:94] pod "kube-scheduler-embed-certs-879000" is "Ready"
	I1122 00:57:58.178724  710840 pod_ready.go:86] duration metric: took 398.39348ms for pod "kube-scheduler-embed-certs-879000" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:57:58.178737  710840 pod_ready.go:40] duration metric: took 39.407657542s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:57:58.251619  710840 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1122 00:57:58.254819  710840 out.go:179] * Done! kubectl is now configured to use "embed-certs-879000" cluster and "default" namespace by default
	I1122 00:57:57.768910  714411 out.go:252]   - Booting up control plane ...
	I1122 00:57:57.769026  714411 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1122 00:57:57.769112  714411 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1122 00:57:57.769178  714411 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1122 00:57:57.787663  714411 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1122 00:57:57.787776  714411 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1122 00:57:57.797750  714411 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1122 00:57:57.797905  714411 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1122 00:57:57.797952  714411 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1122 00:57:57.921159  714411 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1122 00:57:57.921282  714411 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1122 00:57:59.420467  714411 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.500722777s
	I1122 00:57:59.424226  714411 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1122 00:57:59.424325  714411 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1122 00:57:59.424420  714411 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1122 00:57:59.424505  714411 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1122 00:58:02.621764  714411 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.196925866s
	I1122 00:58:03.868010  714411 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.443761565s
	I1122 00:58:05.926787  714411 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.502295854s
	I1122 00:58:05.948847  714411 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1122 00:58:05.966901  714411 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1122 00:58:05.984369  714411 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1122 00:58:05.984601  714411 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-882305 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1122 00:58:05.997188  714411 kubeadm.go:319] [bootstrap-token] Using token: gtlx2j.zsn1vl5ysqq6c7xo
	I1122 00:58:06.003111  714411 out.go:252]   - Configuring RBAC rules ...
	I1122 00:58:06.003267  714411 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1122 00:58:06.007885  714411 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1122 00:58:06.018263  714411 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1122 00:58:06.024846  714411 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1122 00:58:06.029517  714411 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1122 00:58:06.040697  714411 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1122 00:58:06.336451  714411 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1122 00:58:06.786479  714411 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1122 00:58:07.335883  714411 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1122 00:58:07.337123  714411 kubeadm.go:319] 
	I1122 00:58:07.337196  714411 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1122 00:58:07.337201  714411 kubeadm.go:319] 
	I1122 00:58:07.337278  714411 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1122 00:58:07.337283  714411 kubeadm.go:319] 
	I1122 00:58:07.337307  714411 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1122 00:58:07.337366  714411 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1122 00:58:07.337416  714411 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1122 00:58:07.337420  714411 kubeadm.go:319] 
	I1122 00:58:07.337474  714411 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1122 00:58:07.337478  714411 kubeadm.go:319] 
	I1122 00:58:07.337525  714411 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1122 00:58:07.337529  714411 kubeadm.go:319] 
	I1122 00:58:07.337581  714411 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1122 00:58:07.337656  714411 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1122 00:58:07.337724  714411 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1122 00:58:07.337729  714411 kubeadm.go:319] 
	I1122 00:58:07.337839  714411 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1122 00:58:07.337926  714411 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1122 00:58:07.337930  714411 kubeadm.go:319] 
	I1122 00:58:07.338014  714411 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token gtlx2j.zsn1vl5ysqq6c7xo \
	I1122 00:58:07.338117  714411 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ecfebb5fda4f065a571cf90106e71e452abce05aaa4d3155b81d7383977d6854 \
	I1122 00:58:07.338138  714411 kubeadm.go:319] 	--control-plane 
	I1122 00:58:07.338141  714411 kubeadm.go:319] 
	I1122 00:58:07.338226  714411 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1122 00:58:07.338236  714411 kubeadm.go:319] 
	I1122 00:58:07.338318  714411 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token gtlx2j.zsn1vl5ysqq6c7xo \
	I1122 00:58:07.338420  714411 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ecfebb5fda4f065a571cf90106e71e452abce05aaa4d3155b81d7383977d6854 
	I1122 00:58:07.342997  714411 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1122 00:58:07.343230  714411 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1122 00:58:07.343336  714411 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1122 00:58:07.343352  714411 cni.go:84] Creating CNI manager for ""
	I1122 00:58:07.343360  714411 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:58:07.346500  714411 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1122 00:58:07.349469  714411 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1122 00:58:07.353392  714411 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1122 00:58:07.353414  714411 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1122 00:58:07.373753  714411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1122 00:58:07.693417  714411 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1122 00:58:07.693548  714411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:58:07.693613  714411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-882305 minikube.k8s.io/updated_at=2025_11_22T00_58_07_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785 minikube.k8s.io/name=default-k8s-diff-port-882305 minikube.k8s.io/primary=true
	I1122 00:58:07.866986  714411 ops.go:34] apiserver oom_adj: -16
	I1122 00:58:07.867095  714411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:58:08.367799  714411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:58:08.867420  714411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:58:09.368107  714411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:58:09.867380  714411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:58:10.367735  714411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:58:10.867603  714411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:58:11.367592  714411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:58:11.510343  714411 kubeadm.go:1114] duration metric: took 3.816835748s to wait for elevateKubeSystemPrivileges
	I1122 00:58:11.510369  714411 kubeadm.go:403] duration metric: took 21.244985101s to StartCluster
	I1122 00:58:11.510385  714411 settings.go:142] acquiring lock: {Name:mk6c31eb57ec65b047b78b4e1046e03fe7cc77bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:58:11.510452  714411 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1122 00:58:11.511993  714411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/kubeconfig: {Name:mkcd094d8a765e5a81369c0fb33cb5e696b17bb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:58:11.512228  714411 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:58:11.512456  714411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1122 00:58:11.512847  714411 config.go:182] Loaded profile config "default-k8s-diff-port-882305": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:58:11.512888  714411 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1122 00:58:11.512946  714411 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-882305"
	I1122 00:58:11.512959  714411 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-882305"
	I1122 00:58:11.512981  714411 host.go:66] Checking if "default-k8s-diff-port-882305" exists ...
	I1122 00:58:11.513170  714411 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-882305"
	I1122 00:58:11.513192  714411 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-882305"
	I1122 00:58:11.513578  714411 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-882305 --format={{.State.Status}}
	I1122 00:58:11.514350  714411 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-882305 --format={{.State.Status}}
	I1122 00:58:11.515811  714411 out.go:179] * Verifying Kubernetes components...
	I1122 00:58:11.522450  714411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:58:11.549101  714411 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
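	
	The start log above applies the kindnet CNI manifest with kubectl and then polls "kubectl get sa default" until the default service account exists. A minimal sketch of re-running those checks by hand; the profile name, kubectl binary path and kubeconfig path are copied verbatim from the log, and "minikube ssh" being available on the host is an assumption:
	
	  # Confirm the CNI config kindnet wrote on the node (path as reported by CRI-O further below)
	  minikube -p default-k8s-diff-port-882305 ssh -- ls /etc/cni/net.d
	  # Repeat the service-account readiness probe the installer loops on
	  minikube -p default-k8s-diff-port-882305 ssh -- sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig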
	
	
	==> CRI-O <==
	Nov 22 00:57:44 embed-certs-879000 crio[649]: time="2025-11-22T00:57:44.91517462Z" level=info msg="Removed container c008c2dd7377bbfd4128b50485dd8b601432d034f4890680bab4bcd2fdbea6a4: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hnrr2/dashboard-metrics-scraper" id=b560fe16-6c86-4756-87a1-05a69b32c4e7 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 22 00:57:47 embed-certs-879000 conmon[1133]: conmon 6904d457c7dc728f7026 <ninfo>: container 1138 exited with status 1
	Nov 22 00:57:47 embed-certs-879000 crio[649]: time="2025-11-22T00:57:47.906265428Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=2a794be6-fbca-4ca6-85b8-8f1bcf9fac3f name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:57:47 embed-certs-879000 crio[649]: time="2025-11-22T00:57:47.907579942Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=3850f02e-52de-4971-bc78-31845537df1b name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:57:47 embed-certs-879000 crio[649]: time="2025-11-22T00:57:47.918365218Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=f1eec474-6628-4aa4-bcb1-c582c3e90514 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:57:47 embed-certs-879000 crio[649]: time="2025-11-22T00:57:47.918493214Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:57:47 embed-certs-879000 crio[649]: time="2025-11-22T00:57:47.929651747Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:57:47 embed-certs-879000 crio[649]: time="2025-11-22T00:57:47.929893553Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/0a98b134b440713d299f499d934583186303f5db86d330fad3bdb9bc36d2a9eb/merged/etc/passwd: no such file or directory"
	Nov 22 00:57:47 embed-certs-879000 crio[649]: time="2025-11-22T00:57:47.929922722Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/0a98b134b440713d299f499d934583186303f5db86d330fad3bdb9bc36d2a9eb/merged/etc/group: no such file or directory"
	Nov 22 00:57:47 embed-certs-879000 crio[649]: time="2025-11-22T00:57:47.930252878Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:57:47 embed-certs-879000 crio[649]: time="2025-11-22T00:57:47.947361235Z" level=info msg="Created container 933cca2968517797342b88d5e9db0d039293c75efb66faae70c1f0e8a213eaaa: kube-system/storage-provisioner/storage-provisioner" id=f1eec474-6628-4aa4-bcb1-c582c3e90514 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:57:47 embed-certs-879000 crio[649]: time="2025-11-22T00:57:47.957173346Z" level=info msg="Starting container: 933cca2968517797342b88d5e9db0d039293c75efb66faae70c1f0e8a213eaaa" id=722d56c1-e6b3-4150-bb7f-8d146e9dbc5a name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:57:47 embed-certs-879000 crio[649]: time="2025-11-22T00:57:47.960926057Z" level=info msg="Started container" PID=1641 containerID=933cca2968517797342b88d5e9db0d039293c75efb66faae70c1f0e8a213eaaa description=kube-system/storage-provisioner/storage-provisioner id=722d56c1-e6b3-4150-bb7f-8d146e9dbc5a name=/runtime.v1.RuntimeService/StartContainer sandboxID=2dafa3b42ca588edb8bc1337c892149bde9bed3adf6acae13b7580b69f26771f
	Nov 22 00:57:57 embed-certs-879000 crio[649]: time="2025-11-22T00:57:57.624870235Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:57:57 embed-certs-879000 crio[649]: time="2025-11-22T00:57:57.634145492Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:57:57 embed-certs-879000 crio[649]: time="2025-11-22T00:57:57.634180371Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 22 00:57:57 embed-certs-879000 crio[649]: time="2025-11-22T00:57:57.634208628Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:57:57 embed-certs-879000 crio[649]: time="2025-11-22T00:57:57.642049933Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:57:57 embed-certs-879000 crio[649]: time="2025-11-22T00:57:57.642217124Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 22 00:57:57 embed-certs-879000 crio[649]: time="2025-11-22T00:57:57.642313728Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:57:57 embed-certs-879000 crio[649]: time="2025-11-22T00:57:57.646045574Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:57:57 embed-certs-879000 crio[649]: time="2025-11-22T00:57:57.646187601Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 22 00:57:57 embed-certs-879000 crio[649]: time="2025-11-22T00:57:57.64626375Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:57:57 embed-certs-879000 crio[649]: time="2025-11-22T00:57:57.64943343Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:57:57 embed-certs-879000 crio[649]: time="2025-11-22T00:57:57.649556741Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	933cca2968517       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           25 seconds ago       Running             storage-provisioner         2                   2dafa3b42ca58       storage-provisioner                          kube-system
	93d281eec1d97       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           28 seconds ago       Exited              dashboard-metrics-scraper   2                   862b859352853       dashboard-metrics-scraper-6ffb444bf9-hnrr2   kubernetes-dashboard
	94175cffd15a6       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   40 seconds ago       Running             kubernetes-dashboard        0                   97d0187a82c6a       kubernetes-dashboard-855c9754f9-mrcpd        kubernetes-dashboard
	745bdf157ec74       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           56 seconds ago       Running             busybox                     1                   6d5b8906a7105       busybox                                      default
	08b992da614bc       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           56 seconds ago       Running             coredns                     1                   6794853f05ea5       coredns-66bc5c9577-h2kpd                     kube-system
	f859b19e5db26       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           56 seconds ago       Running             kube-proxy                  1                   552addf39817a       kube-proxy-w9bqj                             kube-system
	6904d457c7dc7       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           56 seconds ago       Exited              storage-provisioner         1                   2dafa3b42ca58       storage-provisioner                          kube-system
	9fd53bbae898c       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           56 seconds ago       Running             kindnet-cni                 1                   a68abed2f8733       kindnet-j8wwg                                kube-system
	9852dc8e953e4       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   3a2c8dcdc139f       etcd-embed-certs-879000                      kube-system
	f660ef303bd46       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   17eb5f2423265       kube-controller-manager-embed-certs-879000   kube-system
	d2057db699cba       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   e502162e9c4c6       kube-apiserver-embed-certs-879000            kube-system
	53b12e2f48bad       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   86d5a1ed196f4       kube-scheduler-embed-certs-879000            kube-system
	
	
	==> coredns [08b992da614bc2e772d094ded50c154806974fe8fb54eb1da406e962496e84d6] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55252 - 62034 "HINFO IN 6746390676220224766.2530450903095737698. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.031624797s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
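	
	The repeated "dial tcp 10.96.0.1:443: i/o timeout" errors are CoreDNS failing to reach the in-cluster apiserver service VIP; the kindnet and storage-provisioner sections below report the same timeout. A rough connectivity probe from inside the node, assuming bash and timeout are present in the minikube node image (profile name taken from the log):
	
	  # Returns "open" if the node can reach the kubernetes service VIP within 5s, "timeout" otherwise
	  minikube -p embed-certs-879000 ssh "timeout 5 bash -c '</dev/tcp/10.96.0.1/443' && echo open || echo timeout"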
	
	
	==> describe nodes <==
	Name:               embed-certs-879000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-879000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=embed-certs-879000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T00_55_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:55:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-879000
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:58:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:58:08 +0000   Sat, 22 Nov 2025 00:55:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:58:08 +0000   Sat, 22 Nov 2025 00:55:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:58:08 +0000   Sat, 22 Nov 2025 00:55:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 22 Nov 2025 00:58:08 +0000   Sat, 22 Nov 2025 00:56:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-879000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c92eea4d3c03c0156e01fcf8691e3907
	  System UUID:                37f1296a-29c2-4a0f-8fef-fc1d195b0150
	  Boot ID:                    72ac7385-472f-47d1-a23e-bd80468d6e09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-66bc5c9577-h2kpd                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m19s
	  kube-system                 etcd-embed-certs-879000                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m24s
	  kube-system                 kindnet-j8wwg                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m19s
	  kube-system                 kube-apiserver-embed-certs-879000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-controller-manager-embed-certs-879000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-proxy-w9bqj                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 kube-scheduler-embed-certs-879000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-hnrr2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-mrcpd         0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m18s                  kube-proxy       
	  Normal   Starting                 55s                    kube-proxy       
	  Normal   Starting                 2m36s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m36s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m36s (x8 over 2m36s)  kubelet          Node embed-certs-879000 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m36s (x8 over 2m36s)  kubelet          Node embed-certs-879000 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m36s (x8 over 2m36s)  kubelet          Node embed-certs-879000 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m23s                  kubelet          Node embed-certs-879000 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m23s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m23s                  kubelet          Node embed-certs-879000 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m23s                  kubelet          Node embed-certs-879000 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m23s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m20s                  node-controller  Node embed-certs-879000 event: Registered Node embed-certs-879000 in Controller
	  Normal   NodeReady                98s                    kubelet          Node embed-certs-879000 status is now: NodeReady
	  Normal   Starting                 63s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 63s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  63s (x8 over 63s)      kubelet          Node embed-certs-879000 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    63s (x8 over 63s)      kubelet          Node embed-certs-879000 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     63s (x8 over 63s)      kubelet          Node embed-certs-879000 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           53s                    node-controller  Node embed-certs-879000 event: Registered Node embed-certs-879000 in Controller
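	
	The node snapshot above (labels, conditions, pod table and events) matches what "kubectl describe node" reports. A short way to regenerate it against this profile, assuming the kubeconfig context minikube creates under the profile name:
	
	  kubectl --context embed-certs-879000 describe node embed-certs-879000
	  # Cross-check the 11 non-terminated pods listed above
	  kubectl --context embed-certs-879000 get pods -A -o wide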
	
	
	==> dmesg <==
	[Nov22 00:35] overlayfs: idmapped layers are currently not supported
	[Nov22 00:36] overlayfs: idmapped layers are currently not supported
	[ +18.168104] overlayfs: idmapped layers are currently not supported
	[Nov22 00:37] overlayfs: idmapped layers are currently not supported
	[ +56.322609] overlayfs: idmapped layers are currently not supported
	[Nov22 00:38] overlayfs: idmapped layers are currently not supported
	[Nov22 00:39] overlayfs: idmapped layers are currently not supported
	[ +23.174928] overlayfs: idmapped layers are currently not supported
	[Nov22 00:41] overlayfs: idmapped layers are currently not supported
	[Nov22 00:42] overlayfs: idmapped layers are currently not supported
	[Nov22 00:44] overlayfs: idmapped layers are currently not supported
	[Nov22 00:45] overlayfs: idmapped layers are currently not supported
	[Nov22 00:46] overlayfs: idmapped layers are currently not supported
	[Nov22 00:48] overlayfs: idmapped layers are currently not supported
	[Nov22 00:50] overlayfs: idmapped layers are currently not supported
	[Nov22 00:51] overlayfs: idmapped layers are currently not supported
	[ +11.900293] overlayfs: idmapped layers are currently not supported
	[ +28.922055] overlayfs: idmapped layers are currently not supported
	[Nov22 00:52] overlayfs: idmapped layers are currently not supported
	[Nov22 00:53] overlayfs: idmapped layers are currently not supported
	[Nov22 00:54] overlayfs: idmapped layers are currently not supported
	[Nov22 00:55] overlayfs: idmapped layers are currently not supported
	[Nov22 00:56] overlayfs: idmapped layers are currently not supported
	[Nov22 00:57] overlayfs: idmapped layers are currently not supported
	[Nov22 00:58] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [9852dc8e953e4082535d9da09fe8c7c488642b2923223eea6484d9282094e3ea] <==
	{"level":"warn","ts":"2025-11-22T00:57:14.304034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:57:14.345714Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:57:14.355069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:57:14.373007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:57:14.386613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:57:14.410756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:57:14.427155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:57:14.446454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:57:14.473086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:57:14.501508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:57:14.519917Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:57:14.567472Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:57:14.569705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:57:14.580493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:57:14.605225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:57:14.624813Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:57:14.641009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:57:14.659786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:57:14.676654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:57:14.694298Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:57:14.750798Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:57:14.776595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:57:14.823767Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:57:14.856529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:57:14.933021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38118","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 00:58:13 up  5:40,  0 user,  load average: 3.67, 3.85, 2.94
	Linux embed-certs-879000 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9fd53bbae898cfbeaff1f5002c8476e6762d7ebef17deeac9f498600ae2a7b1b] <==
	I1122 00:57:17.448640       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1122 00:57:17.449057       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1122 00:57:17.449249       1 main.go:148] setting mtu 1500 for CNI 
	I1122 00:57:17.449303       1 main.go:178] kindnetd IP family: "ipv4"
	I1122 00:57:17.449337       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-22T00:57:17Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1122 00:57:17.624268       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1122 00:57:17.643238       1 controller.go:381] "Waiting for informer caches to sync"
	I1122 00:57:17.643278       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1122 00:57:17.644080       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1122 00:57:47.624569       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1122 00:57:47.644195       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1122 00:57:47.644309       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1122 00:57:47.644392       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1122 00:57:48.744280       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1122 00:57:48.744322       1 metrics.go:72] Registering metrics
	I1122 00:57:48.744383       1 controller.go:711] "Syncing nftables rules"
	I1122 00:57:57.624568       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1122 00:57:57.624607       1 main.go:301] handling current node
	I1122 00:58:07.630555       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1122 00:58:07.630656       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d2057db699cba9b5fd5582afa88f5011c61af54cfbf9b6be282bae14ccb3e06b] <==
	I1122 00:57:16.641644       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1122 00:57:16.656692       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1122 00:57:16.668839       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1122 00:57:16.668881       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1122 00:57:16.705884       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1122 00:57:16.709314       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1122 00:57:16.709340       1 policy_source.go:240] refreshing policies
	I1122 00:57:16.739151       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1122 00:57:16.745722       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1122 00:57:16.747896       1 cache.go:39] Caches are synced for autoregister controller
	I1122 00:57:16.748135       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1122 00:57:16.762518       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1122 00:57:16.779771       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1122 00:57:16.843870       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	E1122 00:57:16.863877       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1122 00:57:16.942170       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1122 00:57:17.484970       1 controller.go:667] quota admission added evaluator for: namespaces
	I1122 00:57:17.786616       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1122 00:57:17.920159       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1122 00:57:17.949221       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1122 00:57:18.180412       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.98.159"}
	I1122 00:57:18.215563       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.24.172"}
	I1122 00:57:20.545936       1 controller.go:667] quota admission added evaluator for: endpoints
	I1122 00:57:20.847254       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1122 00:57:21.060665       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [f660ef303bd4694fbdad76a7eb87133a3cca27093a6685ac673521dce9c9d434] <==
	I1122 00:57:20.559800       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:57:20.559827       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1122 00:57:20.559854       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1122 00:57:20.568738       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:57:20.570504       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1122 00:57:20.570567       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1122 00:57:20.570595       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1122 00:57:20.570607       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1122 00:57:20.570613       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1122 00:57:20.573678       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1122 00:57:20.575914       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1122 00:57:20.577051       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1122 00:57:20.578163       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1122 00:57:20.579385       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1122 00:57:20.582901       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1122 00:57:20.582946       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1122 00:57:20.588954       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1122 00:57:20.589034       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1122 00:57:20.588965       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1122 00:57:20.588981       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1122 00:57:20.588994       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1122 00:57:20.589015       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1122 00:57:20.589007       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1122 00:57:20.589025       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1122 00:57:20.599028       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [f859b19e5db26f0386393c8d593ce69ad672e813e636cf787d88dc587b72d3be] <==
	I1122 00:57:18.175287       1 server_linux.go:53] "Using iptables proxy"
	I1122 00:57:18.290478       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1122 00:57:18.391684       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1122 00:57:18.391791       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1122 00:57:18.391914       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1122 00:57:18.421392       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1122 00:57:18.421506       1 server_linux.go:132] "Using iptables Proxier"
	I1122 00:57:18.425216       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1122 00:57:18.425582       1 server.go:527] "Version info" version="v1.34.1"
	I1122 00:57:18.425790       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:57:18.427139       1 config.go:200] "Starting service config controller"
	I1122 00:57:18.427351       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1122 00:57:18.427415       1 config.go:106] "Starting endpoint slice config controller"
	I1122 00:57:18.427466       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1122 00:57:18.427516       1 config.go:403] "Starting serviceCIDR config controller"
	I1122 00:57:18.427543       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1122 00:57:18.428412       1 config.go:309] "Starting node config controller"
	I1122 00:57:18.428463       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1122 00:57:18.428492       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1122 00:57:18.527665       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1122 00:57:18.527664       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1122 00:57:18.527684       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [53b12e2f48badfbf5f25cd651f43e00f0d1451191aa045dab44e2461293c766c] <==
	I1122 00:57:14.064884       1 serving.go:386] Generated self-signed cert in-memory
	I1122 00:57:17.415074       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1122 00:57:17.415102       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:57:17.470617       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1122 00:57:17.476216       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1122 00:57:17.476258       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1122 00:57:17.476751       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1122 00:57:17.476272       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:57:17.497258       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:57:17.476282       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1122 00:57:17.497415       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1122 00:57:17.579514       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1122 00:57:17.597938       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:57:17.598757       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 22 00:57:21 embed-certs-879000 kubelet[782]: I1122 00:57:21.146664     782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qmgl\" (UniqueName: \"kubernetes.io/projected/10d444b4-3695-440b-8e1b-8ddb92023d36-kube-api-access-2qmgl\") pod \"kubernetes-dashboard-855c9754f9-mrcpd\" (UID: \"10d444b4-3695-440b-8e1b-8ddb92023d36\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-mrcpd"
	Nov 22 00:57:21 embed-certs-879000 kubelet[782]: I1122 00:57:21.146687     782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/10d444b4-3695-440b-8e1b-8ddb92023d36-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-mrcpd\" (UID: \"10d444b4-3695-440b-8e1b-8ddb92023d36\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-mrcpd"
	Nov 22 00:57:21 embed-certs-879000 kubelet[782]: W1122 00:57:21.388577     782 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/a6fb6b81dce5b3616a7bbeb7273662d346745bb8e0cc70a843d43e0a09366fd0/crio-862b85935285388072e7bd0d6ae6fea17ac17f3a3c07a13f09c4f3675198fd7d WatchSource:0}: Error finding container 862b85935285388072e7bd0d6ae6fea17ac17f3a3c07a13f09c4f3675198fd7d: Status 404 returned error can't find the container with id 862b85935285388072e7bd0d6ae6fea17ac17f3a3c07a13f09c4f3675198fd7d
	Nov 22 00:57:26 embed-certs-879000 kubelet[782]: I1122 00:57:26.455796     782 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 22 00:57:26 embed-certs-879000 kubelet[782]: I1122 00:57:26.814125     782 scope.go:117] "RemoveContainer" containerID="3002581a3fd9e6cde518227d2932ffb657ea413eb41ea3e05f30be06fe5d1d25"
	Nov 22 00:57:27 embed-certs-879000 kubelet[782]: I1122 00:57:27.820204     782 scope.go:117] "RemoveContainer" containerID="c008c2dd7377bbfd4128b50485dd8b601432d034f4890680bab4bcd2fdbea6a4"
	Nov 22 00:57:27 embed-certs-879000 kubelet[782]: E1122 00:57:27.820363     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-hnrr2_kubernetes-dashboard(28d67df6-e61b-4eae-9947-24db9e122425)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hnrr2" podUID="28d67df6-e61b-4eae-9947-24db9e122425"
	Nov 22 00:57:27 embed-certs-879000 kubelet[782]: I1122 00:57:27.822789     782 scope.go:117] "RemoveContainer" containerID="3002581a3fd9e6cde518227d2932ffb657ea413eb41ea3e05f30be06fe5d1d25"
	Nov 22 00:57:28 embed-certs-879000 kubelet[782]: I1122 00:57:28.831714     782 scope.go:117] "RemoveContainer" containerID="c008c2dd7377bbfd4128b50485dd8b601432d034f4890680bab4bcd2fdbea6a4"
	Nov 22 00:57:28 embed-certs-879000 kubelet[782]: E1122 00:57:28.832094     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-hnrr2_kubernetes-dashboard(28d67df6-e61b-4eae-9947-24db9e122425)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hnrr2" podUID="28d67df6-e61b-4eae-9947-24db9e122425"
	Nov 22 00:57:31 embed-certs-879000 kubelet[782]: I1122 00:57:31.350401     782 scope.go:117] "RemoveContainer" containerID="c008c2dd7377bbfd4128b50485dd8b601432d034f4890680bab4bcd2fdbea6a4"
	Nov 22 00:57:31 embed-certs-879000 kubelet[782]: E1122 00:57:31.350621     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-hnrr2_kubernetes-dashboard(28d67df6-e61b-4eae-9947-24db9e122425)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hnrr2" podUID="28d67df6-e61b-4eae-9947-24db9e122425"
	Nov 22 00:57:44 embed-certs-879000 kubelet[782]: I1122 00:57:44.670872     782 scope.go:117] "RemoveContainer" containerID="c008c2dd7377bbfd4128b50485dd8b601432d034f4890680bab4bcd2fdbea6a4"
	Nov 22 00:57:44 embed-certs-879000 kubelet[782]: I1122 00:57:44.894381     782 scope.go:117] "RemoveContainer" containerID="c008c2dd7377bbfd4128b50485dd8b601432d034f4890680bab4bcd2fdbea6a4"
	Nov 22 00:57:44 embed-certs-879000 kubelet[782]: I1122 00:57:44.894623     782 scope.go:117] "RemoveContainer" containerID="93d281eec1d97f6f14ff89771acbf42bcdfcc26819b8af93e1dd9a6f16af4fd6"
	Nov 22 00:57:44 embed-certs-879000 kubelet[782]: E1122 00:57:44.894775     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-hnrr2_kubernetes-dashboard(28d67df6-e61b-4eae-9947-24db9e122425)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hnrr2" podUID="28d67df6-e61b-4eae-9947-24db9e122425"
	Nov 22 00:57:44 embed-certs-879000 kubelet[782]: I1122 00:57:44.908739     782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-mrcpd" podStartSLOduration=12.507392407 podStartE2EDuration="23.908718201s" podCreationTimestamp="2025-11-22 00:57:21 +0000 UTC" firstStartedPulling="2025-11-22 00:57:21.405329163 +0000 UTC m=+10.998230598" lastFinishedPulling="2025-11-22 00:57:32.806654965 +0000 UTC m=+22.399556392" observedRunningTime="2025-11-22 00:57:33.877751942 +0000 UTC m=+23.470653394" watchObservedRunningTime="2025-11-22 00:57:44.908718201 +0000 UTC m=+34.501619644"
	Nov 22 00:57:47 embed-certs-879000 kubelet[782]: I1122 00:57:47.905607     782 scope.go:117] "RemoveContainer" containerID="6904d457c7dc728f7026d679dcbeeb784ce896011c4ea8efb2ad461a00099705"
	Nov 22 00:57:51 embed-certs-879000 kubelet[782]: I1122 00:57:51.350447     782 scope.go:117] "RemoveContainer" containerID="93d281eec1d97f6f14ff89771acbf42bcdfcc26819b8af93e1dd9a6f16af4fd6"
	Nov 22 00:57:51 embed-certs-879000 kubelet[782]: E1122 00:57:51.350721     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-hnrr2_kubernetes-dashboard(28d67df6-e61b-4eae-9947-24db9e122425)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hnrr2" podUID="28d67df6-e61b-4eae-9947-24db9e122425"
	Nov 22 00:58:03 embed-certs-879000 kubelet[782]: I1122 00:58:03.669745     782 scope.go:117] "RemoveContainer" containerID="93d281eec1d97f6f14ff89771acbf42bcdfcc26819b8af93e1dd9a6f16af4fd6"
	Nov 22 00:58:03 embed-certs-879000 kubelet[782]: E1122 00:58:03.670458     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-hnrr2_kubernetes-dashboard(28d67df6-e61b-4eae-9947-24db9e122425)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hnrr2" podUID="28d67df6-e61b-4eae-9947-24db9e122425"
	Nov 22 00:58:10 embed-certs-879000 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 22 00:58:10 embed-certs-879000 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 22 00:58:10 embed-certs-879000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [94175cffd15a6a8cfd48697de8133883ffd4078df2399b4e302dafcd31ccd293] <==
	2025/11/22 00:57:32 Using namespace: kubernetes-dashboard
	2025/11/22 00:57:32 Using in-cluster config to connect to apiserver
	2025/11/22 00:57:32 Using secret token for csrf signing
	2025/11/22 00:57:32 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/22 00:57:32 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/22 00:57:32 Successful initial request to the apiserver, version: v1.34.1
	2025/11/22 00:57:32 Generating JWE encryption key
	2025/11/22 00:57:32 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/22 00:57:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/22 00:57:33 Initializing JWE encryption key from synchronized object
	2025/11/22 00:57:33 Creating in-cluster Sidecar client
	2025/11/22 00:57:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/22 00:57:33 Serving insecurely on HTTP port: 9090
	2025/11/22 00:58:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/22 00:57:32 Starting overwatch
	
	
	==> storage-provisioner [6904d457c7dc728f7026d679dcbeeb784ce896011c4ea8efb2ad461a00099705] <==
	I1122 00:57:17.764166       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1122 00:57:47.771312       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [933cca2968517797342b88d5e9db0d039293c75efb66faae70c1f0e8a213eaaa] <==
	I1122 00:57:47.994133       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1122 00:57:48.008397       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1122 00:57:48.008550       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1122 00:57:48.015426       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:57:51.471559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:57:55.731953       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:57:59.329933       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:58:02.383918       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:58:05.405893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:58:05.411064       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1122 00:58:05.411292       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1122 00:58:05.411501       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-879000_14ebbff0-0f1c-4acd-8a13-d03de6ddee56!
	I1122 00:58:05.412440       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2574480b-6478-427e-83f2-2c518ace1325", APIVersion:"v1", ResourceVersion:"679", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-879000_14ebbff0-0f1c-4acd-8a13-d03de6ddee56 became leader
	W1122 00:58:05.420998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:58:05.429200       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1122 00:58:05.511731       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-879000_14ebbff0-0f1c-4acd-8a13-d03de6ddee56!
	W1122 00:58:07.432805       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:58:07.441515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:58:09.452276       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:58:09.458631       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:58:11.471478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:58:11.490044       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:58:13.493153       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:58:13.497718       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-879000 -n embed-certs-879000
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-879000 -n embed-certs-879000: exit status 2 (434.929782ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-879000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-879000
helpers_test.go:243: (dbg) docker inspect embed-certs-879000:

-- stdout --
	[
	    {
	        "Id": "a6fb6b81dce5b3616a7bbeb7273662d346745bb8e0cc70a843d43e0a09366fd0",
	        "Created": "2025-11-22T00:55:18.964561473Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 710970,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-22T00:57:03.672522224Z",
	            "FinishedAt": "2025-11-22T00:57:02.577511562Z"
	        },
	        "Image": "sha256:97061622515a85470dad13cb046ad5fc5021fcd2b05aa921a620abb52951cd5d",
	        "ResolvConfPath": "/var/lib/docker/containers/a6fb6b81dce5b3616a7bbeb7273662d346745bb8e0cc70a843d43e0a09366fd0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a6fb6b81dce5b3616a7bbeb7273662d346745bb8e0cc70a843d43e0a09366fd0/hostname",
	        "HostsPath": "/var/lib/docker/containers/a6fb6b81dce5b3616a7bbeb7273662d346745bb8e0cc70a843d43e0a09366fd0/hosts",
	        "LogPath": "/var/lib/docker/containers/a6fb6b81dce5b3616a7bbeb7273662d346745bb8e0cc70a843d43e0a09366fd0/a6fb6b81dce5b3616a7bbeb7273662d346745bb8e0cc70a843d43e0a09366fd0-json.log",
	        "Name": "/embed-certs-879000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-879000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-879000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a6fb6b81dce5b3616a7bbeb7273662d346745bb8e0cc70a843d43e0a09366fd0",
	                "LowerDir": "/var/lib/docker/overlay2/b7e6923f56a551fc28b0dd2aeb630a3573a17c8126bc88462d7dcfbefd35cac0-init/diff:/var/lib/docker/overlay2/7e8788c6de692bc1c3758a2bb2c4b8da0fbba26855f855c0f3b655bfbdd92f8e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b7e6923f56a551fc28b0dd2aeb630a3573a17c8126bc88462d7dcfbefd35cac0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b7e6923f56a551fc28b0dd2aeb630a3573a17c8126bc88462d7dcfbefd35cac0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b7e6923f56a551fc28b0dd2aeb630a3573a17c8126bc88462d7dcfbefd35cac0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-879000",
	                "Source": "/var/lib/docker/volumes/embed-certs-879000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-879000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-879000",
	                "name.minikube.sigs.k8s.io": "embed-certs-879000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4a9a6ad23ee32d6c8fa5a452dcd61563d2b58f67e2e1ab8e855c0e878d1731a9",
	            "SandboxKey": "/var/run/docker/netns/4a9a6ad23ee3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33797"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33798"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33801"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33799"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33800"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-879000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3e:6c:0e:f5:18:fa",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9a53cf267b81b1ff031dda8888cce06c9d46b1b11b960898e399a8e14526904f",
	                    "EndpointID": "c19e7971d917a067e288a90add6dfe0ed556affc0e6ed95eaf7df95bd74a471a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-879000",
	                        "a6fb6b81dce5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-879000 -n embed-certs-879000
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-879000 -n embed-certs-879000: exit status 2 (370.668408ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-879000 logs -n 25
E1122 00:58:16.028485  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/old-k8s-version-625837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-879000 logs -n 25: (1.303088258s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p old-k8s-version-625837 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-625837       │ jenkins │ v1.37.0 │ 22 Nov 25 00:53 UTC │ 22 Nov 25 00:54 UTC │
	│ image   │ old-k8s-version-625837 image list --format=json                                                                                                                                                                                               │ old-k8s-version-625837       │ jenkins │ v1.37.0 │ 22 Nov 25 00:54 UTC │ 22 Nov 25 00:54 UTC │
	│ pause   │ -p old-k8s-version-625837 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-625837       │ jenkins │ v1.37.0 │ 22 Nov 25 00:54 UTC │                     │
	│ start   │ -p cert-expiration-621390 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-621390       │ jenkins │ v1.37.0 │ 22 Nov 25 00:54 UTC │ 22 Nov 25 00:55 UTC │
	│ delete  │ -p old-k8s-version-625837                                                                                                                                                                                                                     │ old-k8s-version-625837       │ jenkins │ v1.37.0 │ 22 Nov 25 00:54 UTC │ 22 Nov 25 00:54 UTC │
	│ delete  │ -p old-k8s-version-625837                                                                                                                                                                                                                     │ old-k8s-version-625837       │ jenkins │ v1.37.0 │ 22 Nov 25 00:54 UTC │ 22 Nov 25 00:54 UTC │
	│ start   │ -p no-preload-165130 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-165130            │ jenkins │ v1.37.0 │ 22 Nov 25 00:54 UTC │ 22 Nov 25 00:56 UTC │
	│ delete  │ -p cert-expiration-621390                                                                                                                                                                                                                     │ cert-expiration-621390       │ jenkins │ v1.37.0 │ 22 Nov 25 00:55 UTC │ 22 Nov 25 00:55 UTC │
	│ start   │ -p embed-certs-879000 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-879000           │ jenkins │ v1.37.0 │ 22 Nov 25 00:55 UTC │ 22 Nov 25 00:56 UTC │
	│ addons  │ enable metrics-server -p no-preload-165130 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-165130            │ jenkins │ v1.37.0 │ 22 Nov 25 00:56 UTC │                     │
	│ stop    │ -p no-preload-165130 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-165130            │ jenkins │ v1.37.0 │ 22 Nov 25 00:56 UTC │ 22 Nov 25 00:56 UTC │
	│ addons  │ enable dashboard -p no-preload-165130 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-165130            │ jenkins │ v1.37.0 │ 22 Nov 25 00:56 UTC │ 22 Nov 25 00:56 UTC │
	│ start   │ -p no-preload-165130 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-165130            │ jenkins │ v1.37.0 │ 22 Nov 25 00:56 UTC │ 22 Nov 25 00:57 UTC │
	│ addons  │ enable metrics-server -p embed-certs-879000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-879000           │ jenkins │ v1.37.0 │ 22 Nov 25 00:56 UTC │                     │
	│ stop    │ -p embed-certs-879000 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-879000           │ jenkins │ v1.37.0 │ 22 Nov 25 00:56 UTC │ 22 Nov 25 00:57 UTC │
	│ addons  │ enable dashboard -p embed-certs-879000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-879000           │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │ 22 Nov 25 00:57 UTC │
	│ start   │ -p embed-certs-879000 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-879000           │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │ 22 Nov 25 00:57 UTC │
	│ image   │ no-preload-165130 image list --format=json                                                                                                                                                                                                    │ no-preload-165130            │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │ 22 Nov 25 00:57 UTC │
	│ pause   │ -p no-preload-165130 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-165130            │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │                     │
	│ delete  │ -p no-preload-165130                                                                                                                                                                                                                          │ no-preload-165130            │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │ 22 Nov 25 00:57 UTC │
	│ delete  │ -p no-preload-165130                                                                                                                                                                                                                          │ no-preload-165130            │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │ 22 Nov 25 00:57 UTC │
	│ delete  │ -p disable-driver-mounts-046489                                                                                                                                                                                                               │ disable-driver-mounts-046489 │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │ 22 Nov 25 00:57 UTC │
	│ start   │ -p default-k8s-diff-port-882305 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-882305 │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │                     │
	│ image   │ embed-certs-879000 image list --format=json                                                                                                                                                                                                   │ embed-certs-879000           │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │ 22 Nov 25 00:58 UTC │
	│ pause   │ -p embed-certs-879000 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-879000           │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 00:57:36
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1122 00:57:36.338855  714411 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:57:36.338966  714411 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:57:36.338975  714411 out.go:374] Setting ErrFile to fd 2...
	I1122 00:57:36.338980  714411 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:57:36.339261  714411 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1122 00:57:36.339806  714411 out.go:368] Setting JSON to false
	I1122 00:57:36.340878  714411 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":20373,"bootTime":1763752684,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1122 00:57:36.340949  714411 start.go:143] virtualization:  
	I1122 00:57:36.344745  714411 out.go:179] * [default-k8s-diff-port-882305] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1122 00:57:36.348423  714411 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:57:36.348572  714411 notify.go:221] Checking for updates...
	I1122 00:57:36.354235  714411 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:57:36.357038  714411 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1122 00:57:36.359823  714411 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-513600/.minikube
	I1122 00:57:36.362686  714411 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1122 00:57:36.365604  714411 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:57:36.369421  714411 config.go:182] Loaded profile config "embed-certs-879000": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:57:36.369583  714411 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:57:36.401941  714411 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1122 00:57:36.402074  714411 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:57:36.464707  714411 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-22 00:57:36.451169185 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:57:36.464815  714411 docker.go:319] overlay module found
	I1122 00:57:36.467951  714411 out.go:179] * Using the docker driver based on user configuration
	I1122 00:57:36.470826  714411 start.go:309] selected driver: docker
	I1122 00:57:36.470847  714411 start.go:930] validating driver "docker" against <nil>
	I1122 00:57:36.470867  714411 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:57:36.471660  714411 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:57:36.531947  714411 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-22 00:57:36.522770872 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:57:36.532110  714411 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1122 00:57:36.532335  714411 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:57:36.535396  714411 out.go:179] * Using Docker driver with root privileges
	I1122 00:57:36.538520  714411 cni.go:84] Creating CNI manager for ""
	I1122 00:57:36.538596  714411 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:57:36.538610  714411 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1122 00:57:36.538689  714411 start.go:353] cluster config:
	{Name:default-k8s-diff-port-882305 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-882305 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:57:36.542550  714411 out.go:179] * Starting "default-k8s-diff-port-882305" primary control-plane node in "default-k8s-diff-port-882305" cluster
	I1122 00:57:36.545709  714411 cache.go:134] Beginning downloading kic base image for docker with crio
	I1122 00:57:36.548919  714411 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:57:36.552325  714411 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:57:36.552377  714411 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1122 00:57:36.552399  714411 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:57:36.552413  714411 cache.go:65] Caching tarball of preloaded images
	I1122 00:57:36.552499  714411 preload.go:238] Found /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1122 00:57:36.552510  714411 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1122 00:57:36.552615  714411 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/config.json ...
	I1122 00:57:36.552636  714411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/config.json: {Name:mk88d3853903bd6dc43beb8d0931343736bf22be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:57:36.571735  714411 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:57:36.571762  714411 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:57:36.571783  714411 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:57:36.571807  714411 start.go:360] acquireMachinesLock for default-k8s-diff-port-882305: {Name:mk803954bb6347dd99a7e73d8fd5992e1319a31c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:57:36.571913  714411 start.go:364] duration metric: took 86.537µs to acquireMachinesLock for "default-k8s-diff-port-882305"
	I1122 00:57:36.571944  714411 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-882305 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-882305 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:57:36.572025  714411 start.go:125] createHost starting for "" (driver="docker")
	W1122 00:57:34.281967  710840 pod_ready.go:104] pod "coredns-66bc5c9577-h2kpd" is not "Ready", error: <nil>
	W1122 00:57:36.282360  710840 pod_ready.go:104] pod "coredns-66bc5c9577-h2kpd" is not "Ready", error: <nil>
	I1122 00:57:36.576834  714411 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1122 00:57:36.577089  714411 start.go:159] libmachine.API.Create for "default-k8s-diff-port-882305" (driver="docker")
	I1122 00:57:36.577128  714411 client.go:173] LocalClient.Create starting
	I1122 00:57:36.577211  714411 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem
	I1122 00:57:36.577253  714411 main.go:143] libmachine: Decoding PEM data...
	I1122 00:57:36.577272  714411 main.go:143] libmachine: Parsing certificate...
	I1122 00:57:36.577329  714411 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem
	I1122 00:57:36.577350  714411 main.go:143] libmachine: Decoding PEM data...
	I1122 00:57:36.577364  714411 main.go:143] libmachine: Parsing certificate...
	I1122 00:57:36.577754  714411 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-882305 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1122 00:57:36.592847  714411 cli_runner.go:211] docker network inspect default-k8s-diff-port-882305 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1122 00:57:36.592924  714411 network_create.go:284] running [docker network inspect default-k8s-diff-port-882305] to gather additional debugging logs...
	I1122 00:57:36.592943  714411 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-882305
	W1122 00:57:36.608198  714411 cli_runner.go:211] docker network inspect default-k8s-diff-port-882305 returned with exit code 1
	I1122 00:57:36.608244  714411 network_create.go:287] error running [docker network inspect default-k8s-diff-port-882305]: docker network inspect default-k8s-diff-port-882305: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-882305 not found
	I1122 00:57:36.608257  714411 network_create.go:289] output of [docker network inspect default-k8s-diff-port-882305]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-882305 not found
	
	** /stderr **
	I1122 00:57:36.608350  714411 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:57:36.626234  714411 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b16c782e3da8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ba:82:00:9d:45:d0} reservation:<nil>}
	I1122 00:57:36.626554  714411 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-13c9c00b5de5 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:7a:4e:a4:3d:42:9e} reservation:<nil>}
	I1122 00:57:36.626915  714411 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3c074a6aa87b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9a:1f:77:e5:90:0b} reservation:<nil>}
	I1122 00:57:36.627176  714411 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-9a53cf267b81 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:78:c2:70:c7:bf} reservation:<nil>}
	I1122 00:57:36.627607  714411 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a54150}
	I1122 00:57:36.627625  714411 network_create.go:124] attempt to create docker network default-k8s-diff-port-882305 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1122 00:57:36.627679  714411 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-882305 default-k8s-diff-port-882305
	I1122 00:57:36.687123  714411 network_create.go:108] docker network default-k8s-diff-port-882305 192.168.85.0/24 created
	I1122 00:57:36.687155  714411 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-882305" container
	I1122 00:57:36.687250  714411 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1122 00:57:36.703679  714411 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-882305 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-882305 --label created_by.minikube.sigs.k8s.io=true
	I1122 00:57:36.721251  714411 oci.go:103] Successfully created a docker volume default-k8s-diff-port-882305
	I1122 00:57:36.721347  714411 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-882305-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-882305 --entrypoint /usr/bin/test -v default-k8s-diff-port-882305:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -d /var/lib
	I1122 00:57:37.288284  714411 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-882305
	I1122 00:57:37.288358  714411 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:57:37.288373  714411 kic.go:194] Starting extracting preloaded images to volume ...
	I1122 00:57:37.288444  714411 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-882305:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir
	W1122 00:57:38.781359  710840 pod_ready.go:104] pod "coredns-66bc5c9577-h2kpd" is not "Ready", error: <nil>
	W1122 00:57:40.782821  710840 pod_ready.go:104] pod "coredns-66bc5c9577-h2kpd" is not "Ready", error: <nil>
	W1122 00:57:43.281271  710840 pod_ready.go:104] pod "coredns-66bc5c9577-h2kpd" is not "Ready", error: <nil>
	I1122 00:57:41.634887  714411 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-882305:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir: (4.346401253s)
	I1122 00:57:41.634918  714411 kic.go:203] duration metric: took 4.346542253s to extract preloaded images to volume ...
	W1122 00:57:41.635059  714411 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1122 00:57:41.635179  714411 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1122 00:57:41.700557  714411 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-882305 --name default-k8s-diff-port-882305 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-882305 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-882305 --network default-k8s-diff-port-882305 --ip 192.168.85.2 --volume default-k8s-diff-port-882305:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e
	I1122 00:57:42.051604  714411 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-882305 --format={{.State.Running}}
	I1122 00:57:42.075492  714411 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-882305 --format={{.State.Status}}
	I1122 00:57:42.109978  714411 cli_runner.go:164] Run: docker exec default-k8s-diff-port-882305 stat /var/lib/dpkg/alternatives/iptables
	I1122 00:57:42.214166  714411 oci.go:144] the created container "default-k8s-diff-port-882305" has a running status.
	I1122 00:57:42.214214  714411 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/default-k8s-diff-port-882305/id_rsa...
	I1122 00:57:42.534977  714411 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21934-513600/.minikube/machines/default-k8s-diff-port-882305/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1122 00:57:42.557084  714411 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-882305 --format={{.State.Status}}
	I1122 00:57:42.577178  714411 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1122 00:57:42.577197  714411 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-882305 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1122 00:57:42.645260  714411 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-882305 --format={{.State.Status}}
	I1122 00:57:42.665294  714411 machine.go:94] provisionDockerMachine start ...
	I1122 00:57:42.665384  714411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-882305
	I1122 00:57:42.687609  714411 main.go:143] libmachine: Using SSH client type: native
	I1122 00:57:42.688024  714411 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33802 <nil> <nil>}
	I1122 00:57:42.688037  714411 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:57:42.688714  714411 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1122 00:57:45.829296  714411 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-882305
	
	I1122 00:57:45.829321  714411 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-882305"
	I1122 00:57:45.829384  714411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-882305
	I1122 00:57:45.847592  714411 main.go:143] libmachine: Using SSH client type: native
	I1122 00:57:45.847925  714411 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33802 <nil> <nil>}
	I1122 00:57:45.847940  714411 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-882305 && echo "default-k8s-diff-port-882305" | sudo tee /etc/hostname
	I1122 00:57:45.996173  714411 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-882305
	
	I1122 00:57:45.996335  714411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-882305
	I1122 00:57:46.016589  714411 main.go:143] libmachine: Using SSH client type: native
	I1122 00:57:46.016919  714411 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33802 <nil> <nil>}
	I1122 00:57:46.016937  714411 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-882305' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-882305/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-882305' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:57:46.167713  714411 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1122 00:57:46.167739  714411 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-513600/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-513600/.minikube}
	I1122 00:57:46.167758  714411 ubuntu.go:190] setting up certificates
	I1122 00:57:46.167769  714411 provision.go:84] configureAuth start
	I1122 00:57:46.167845  714411 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-882305
	I1122 00:57:46.184944  714411 provision.go:143] copyHostCerts
	I1122 00:57:46.185019  714411 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem, removing ...
	I1122 00:57:46.185033  714411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem
	I1122 00:57:46.185111  714411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem (1675 bytes)
	I1122 00:57:46.185206  714411 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem, removing ...
	I1122 00:57:46.185218  714411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem
	I1122 00:57:46.185247  714411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem (1078 bytes)
	I1122 00:57:46.185306  714411 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem, removing ...
	I1122 00:57:46.185315  714411 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem
	I1122 00:57:46.185339  714411 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem (1123 bytes)
	I1122 00:57:46.185391  714411 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-882305 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-882305 localhost minikube]
	I1122 00:57:46.300381  714411 provision.go:177] copyRemoteCerts
	I1122 00:57:46.300458  714411 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:57:46.300498  714411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-882305
	I1122 00:57:46.318147  714411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33802 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/default-k8s-diff-port-882305/id_rsa Username:docker}
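For reference, the Port:33802 in the ssh client struct above is whatever host port Docker assigned to the container's 22/tcp; a minimal sketch (assuming Docker is on PATH and the container created above still exists) of resolving it with the same inspect template the log keeps running:

// port_lookup_sketch.go — illustration only, not minikube code.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const name = "default-k8s-diff-port-882305" // container name from the log
	// Same Go template the log passes to `docker container inspect -f`.
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, name).Output()
	if err != nil {
		panic(err)
	}
	// e.g. "33802", which becomes the Port used by the ssh client above.
	fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
}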
	W1122 00:57:45.283266  710840 pod_ready.go:104] pod "coredns-66bc5c9577-h2kpd" is not "Ready", error: <nil>
	W1122 00:57:47.786561  710840 pod_ready.go:104] pod "coredns-66bc5c9577-h2kpd" is not "Ready", error: <nil>
	I1122 00:57:46.422892  714411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1122 00:57:46.441326  714411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:57:46.461539  714411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1122 00:57:46.481467  714411 provision.go:87] duration metric: took 313.677132ms to configureAuth
	I1122 00:57:46.481507  714411 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:57:46.481685  714411 config.go:182] Loaded profile config "default-k8s-diff-port-882305": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:57:46.481794  714411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-882305
	I1122 00:57:46.498946  714411 main.go:143] libmachine: Using SSH client type: native
	I1122 00:57:46.499282  714411 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33802 <nil> <nil>}
	I1122 00:57:46.499301  714411 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1122 00:57:46.897323  714411 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1122 00:57:46.897423  714411 machine.go:97] duration metric: took 4.232109748s to provisionDockerMachine
	I1122 00:57:46.897447  714411 client.go:176] duration metric: took 10.320307836s to LocalClient.Create
	I1122 00:57:46.897492  714411 start.go:167] duration metric: took 10.320404334s to libmachine.API.Create "default-k8s-diff-port-882305"
	I1122 00:57:46.897518  714411 start.go:293] postStartSetup for "default-k8s-diff-port-882305" (driver="docker")
	I1122 00:57:46.897543  714411 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:57:46.897643  714411 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:57:46.897704  714411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-882305
	I1122 00:57:46.917542  714411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33802 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/default-k8s-diff-port-882305/id_rsa Username:docker}
	I1122 00:57:47.018291  714411 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:57:47.021795  714411 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:57:47.021854  714411 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:57:47.021867  714411 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/addons for local assets ...
	I1122 00:57:47.021922  714411 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/files for local assets ...
	I1122 00:57:47.022011  714411 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem -> 5169372.pem in /etc/ssl/certs
	I1122 00:57:47.022123  714411 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:57:47.029651  714411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /etc/ssl/certs/5169372.pem (1708 bytes)
	I1122 00:57:47.050910  714411 start.go:296] duration metric: took 153.363882ms for postStartSetup
	I1122 00:57:47.051304  714411 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-882305
	I1122 00:57:47.069776  714411 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/config.json ...
	I1122 00:57:47.070154  714411 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:57:47.070205  714411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-882305
	I1122 00:57:47.094646  714411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33802 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/default-k8s-diff-port-882305/id_rsa Username:docker}
	I1122 00:57:47.190790  714411 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:57:47.195372  714411 start.go:128] duration metric: took 10.623328673s to createHost
	I1122 00:57:47.195443  714411 start.go:83] releasing machines lock for "default-k8s-diff-port-882305", held for 10.623514989s
	I1122 00:57:47.195568  714411 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-882305
	I1122 00:57:47.213580  714411 ssh_runner.go:195] Run: cat /version.json
	I1122 00:57:47.213654  714411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-882305
	I1122 00:57:47.213976  714411 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:57:47.214041  714411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-882305
	I1122 00:57:47.237694  714411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33802 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/default-k8s-diff-port-882305/id_rsa Username:docker}
	I1122 00:57:47.247384  714411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33802 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/default-k8s-diff-port-882305/id_rsa Username:docker}
	I1122 00:57:47.341556  714411 ssh_runner.go:195] Run: systemctl --version
	I1122 00:57:47.436128  714411 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1122 00:57:47.476930  714411 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:57:47.481939  714411 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:57:47.482011  714411 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:57:47.511413  714411 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1122 00:57:47.511434  714411 start.go:496] detecting cgroup driver to use...
	I1122 00:57:47.511467  714411 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1122 00:57:47.511528  714411 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1122 00:57:47.529997  714411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1122 00:57:47.543807  714411 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:57:47.543898  714411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:57:47.561702  714411 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:57:47.582585  714411 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:57:47.724087  714411 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:57:47.866352  714411 docker.go:234] disabling docker service ...
	I1122 00:57:47.866468  714411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:57:47.888916  714411 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:57:47.911285  714411 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:57:48.074318  714411 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:57:48.193871  714411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:57:48.206619  714411 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:57:48.220358  714411 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1122 00:57:48.220472  714411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:57:48.230099  714411 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1122 00:57:48.230169  714411 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:57:48.241238  714411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:57:48.251030  714411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:57:48.260004  714411 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:57:48.269326  714411 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:57:48.282789  714411 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:57:48.298599  714411 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:57:48.307769  714411 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:57:48.316181  714411 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:57:48.323680  714411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:57:48.449992  714411 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1122 00:57:48.636803  714411 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1122 00:57:48.636906  714411 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1122 00:57:48.641309  714411 start.go:564] Will wait 60s for crictl version
	I1122 00:57:48.641399  714411 ssh_runner.go:195] Run: which crictl
	I1122 00:57:48.645290  714411 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:57:48.680443  714411 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
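The "Will wait 60s for socket path" step above amounts to polling the CRI-O unix socket until it accepts a connection; a minimal sketch of that wait, assuming the socket path shown in the log:

// crio_socket_wait_sketch.go — illustration only, not minikube code.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const socket = "/var/run/crio/crio.sock" // path from the log
	deadline := time.Now().Add(60 * time.Second)
	for {
		// A successful unix-socket dial means crio is up and listening.
		conn, err := net.DialTimeout("unix", socket, time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("crio socket is up")
			return
		}
		if time.Now().After(deadline) {
			fmt.Println("timed out waiting for", socket)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}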
	I1122 00:57:48.680594  714411 ssh_runner.go:195] Run: crio --version
	I1122 00:57:48.717115  714411 ssh_runner.go:195] Run: crio --version
	I1122 00:57:48.758504  714411 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1122 00:57:48.763666  714411 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-882305 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:57:48.789982  714411 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1122 00:57:48.793913  714411 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
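The /etc/hosts one-liner above follows a replace-or-append pattern: drop any line already ending in a tab plus host.minikube.internal, then append the gateway mapping. A small Go sketch of the same pattern (writing to a scratch file instead of /etc/hosts, purely as an illustration):

// hosts_entry_sketch.go — illustration of the replace-or-append pattern above.
package main

import (
	"os"
	"strings"
)

func main() {
	const name = "host.minikube.internal"
	const entry = "192.168.85.1\t" + name // gateway IP from the log

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop any stale mapping, like the `grep -v` above
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	// Write to a scratch file rather than /etc/hosts; illustration only.
	if err := os.WriteFile("hosts.out", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}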
	I1122 00:57:48.804338  714411 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-882305 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-882305 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1122 00:57:48.804465  714411 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:57:48.804546  714411 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:57:48.841363  714411 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:57:48.841393  714411 crio.go:433] Images already preloaded, skipping extraction
	I1122 00:57:48.841450  714411 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:57:48.867619  714411 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:57:48.867642  714411 cache_images.go:86] Images are preloaded, skipping loading
	I1122 00:57:48.867649  714411 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1122 00:57:48.867739  714411 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-882305 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-882305 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1122 00:57:48.867823  714411 ssh_runner.go:195] Run: crio config
	I1122 00:57:48.935771  714411 cni.go:84] Creating CNI manager for ""
	I1122 00:57:48.935850  714411 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:57:48.935884  714411 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1122 00:57:48.935935  714411 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-882305 NodeName:default-k8s-diff-port-882305 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 00:57:48.936099  714411 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-882305"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1122 00:57:48.936201  714411 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 00:57:48.944157  714411 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:57:48.944231  714411 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1122 00:57:48.952059  714411 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1122 00:57:48.965184  714411 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:57:48.977916  714411 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
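The kubeadm config generated above is written to /var/tmp/minikube/kubeadm.yaml.new on the node; a hypothetical sanity check (not something minikube runs) that the non-default API server port 8444 made it into bindPort and controlPlaneEndpoint, using only the Go standard library:

// kubeadm_port_check_sketch.go — hypothetical check, illustration only.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new") // path from the log
	if err != nil {
		panic(err)
	}
	bindPort := regexp.MustCompile(`(?m)^\s*bindPort:\s*(\d+)`).FindSubmatch(data)
	endpoint := regexp.MustCompile(`(?m)^\s*controlPlaneEndpoint:\s*(\S+)`).FindSubmatch(data)
	if bindPort == nil || endpoint == nil {
		panic("expected bindPort and controlPlaneEndpoint in the config")
	}
	// For this profile both should carry the 8444 "diff port".
	fmt.Printf("bindPort=%s controlPlaneEndpoint=%s\n", bindPort[1], endpoint[1])
}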
	I1122 00:57:48.990672  714411 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1122 00:57:48.994270  714411 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:57:49.005478  714411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:57:49.129181  714411 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:57:49.145991  714411 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305 for IP: 192.168.85.2
	I1122 00:57:49.146052  714411 certs.go:195] generating shared ca certs ...
	I1122 00:57:49.146082  714411 certs.go:227] acquiring lock for ca certs: {Name:mkaf4c79493334cb2058b349c36be7473837f9b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:57:49.146244  714411 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key
	I1122 00:57:49.146313  714411 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key
	I1122 00:57:49.146334  714411 certs.go:257] generating profile certs ...
	I1122 00:57:49.146417  714411 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/client.key
	I1122 00:57:49.146449  714411 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/client.crt with IP's: []
	I1122 00:57:49.514846  714411 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/client.crt ...
	I1122 00:57:49.514881  714411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/client.crt: {Name:mk7377ae04629c0ac33a6f4a312bb4bb2ed63cf6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:57:49.515081  714411 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/client.key ...
	I1122 00:57:49.515096  714411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/client.key: {Name:mk3d952a0612866c8f7a6ae6f6bb8b13b69f3dc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:57:49.515190  714411 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/apiserver.key.14c699f7
	I1122 00:57:49.515207  714411 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/apiserver.crt.14c699f7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1122 00:57:49.652313  714411 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/apiserver.crt.14c699f7 ...
	I1122 00:57:49.652346  714411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/apiserver.crt.14c699f7: {Name:mk3fb1b2409298ad13f2a2bbc731fb22e92e1fd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:57:49.652518  714411 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/apiserver.key.14c699f7 ...
	I1122 00:57:49.652533  714411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/apiserver.key.14c699f7: {Name:mk3c157e6ae7da90405863e114d5f562094f32d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:57:49.652620  714411 certs.go:382] copying /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/apiserver.crt.14c699f7 -> /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/apiserver.crt
	I1122 00:57:49.652702  714411 certs.go:386] copying /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/apiserver.key.14c699f7 -> /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/apiserver.key
	I1122 00:57:49.652762  714411 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/proxy-client.key
	I1122 00:57:49.652781  714411 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/proxy-client.crt with IP's: []
	I1122 00:57:49.813893  714411 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/proxy-client.crt ...
	I1122 00:57:49.813923  714411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/proxy-client.crt: {Name:mk113ad53d15edaae52dea3559e2e61985f0d225 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:57:49.814093  714411 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/proxy-client.key ...
	I1122 00:57:49.814113  714411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/proxy-client.key: {Name:mk36d48c76d061c1f8fdbb9f64c8bcc3b3b20c16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:57:49.814299  714411 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem (1338 bytes)
	W1122 00:57:49.814345  714411 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937_empty.pem, impossibly tiny 0 bytes
	I1122 00:57:49.814358  714411 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:57:49.814387  714411 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:57:49.814416  714411 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:57:49.814445  714411 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem (1675 bytes)
	I1122 00:57:49.814496  714411 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem (1708 bytes)
	I1122 00:57:49.815036  714411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:57:49.835252  714411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1122 00:57:49.855020  714411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:57:49.873296  714411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:57:49.891626  714411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1122 00:57:49.909596  714411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1122 00:57:49.927463  714411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:57:49.945589  714411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1122 00:57:49.962712  714411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem --> /usr/share/ca-certificates/516937.pem (1338 bytes)
	I1122 00:57:49.979872  714411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /usr/share/ca-certificates/5169372.pem (1708 bytes)
	I1122 00:57:49.998214  714411 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:57:50.028353  714411 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 00:57:50.053434  714411 ssh_runner.go:195] Run: openssl version
	I1122 00:57:50.060745  714411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516937.pem && ln -fs /usr/share/ca-certificates/516937.pem /etc/ssl/certs/516937.pem"
	I1122 00:57:50.071075  714411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516937.pem
	I1122 00:57:50.075861  714411 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:55 /usr/share/ca-certificates/516937.pem
	I1122 00:57:50.075973  714411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516937.pem
	I1122 00:57:50.125693  714411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516937.pem /etc/ssl/certs/51391683.0"
	I1122 00:57:50.134481  714411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5169372.pem && ln -fs /usr/share/ca-certificates/5169372.pem /etc/ssl/certs/5169372.pem"
	I1122 00:57:50.143325  714411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5169372.pem
	I1122 00:57:50.147490  714411 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:55 /usr/share/ca-certificates/5169372.pem
	I1122 00:57:50.147582  714411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5169372.pem
	I1122 00:57:50.189297  714411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5169372.pem /etc/ssl/certs/3ec20f2e.0"
	I1122 00:57:50.198053  714411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:57:50.206803  714411 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:57:50.210840  714411 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:57:50.210906  714411 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:57:50.252654  714411 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
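The openssl/ln pairs above install each CA under /etc/ssl/certs as <subject-hash>.0; a sketch of that step for the minikubeCA.pem path from the log, shelling out to the same openssl invocation (needs write access to the target directory):

// cert_hash_link_sketch.go — illustration of the hash-and-symlink step above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
	dir := "/etc/ssl/certs"

	// Same command the log runs: openssl x509 -hash -noout -in <pem>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	link := filepath.Join(dir, strings.TrimSpace(string(out))+".0")
	// Mirror `ln -fs`: drop any existing link, then recreate it.
	_ = os.Remove(link)
	if err := os.Symlink(pem, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", pem)
}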
	I1122 00:57:50.261552  714411 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:57:50.265268  714411 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1122 00:57:50.265389  714411 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-882305 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-882305 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:57:50.265474  714411 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1122 00:57:50.265546  714411 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:57:50.298072  714411 cri.go:89] found id: ""
	I1122 00:57:50.298160  714411 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 00:57:50.306224  714411 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1122 00:57:50.313894  714411 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1122 00:57:50.313958  714411 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1122 00:57:50.321662  714411 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1122 00:57:50.321682  714411 kubeadm.go:158] found existing configuration files:
	
	I1122 00:57:50.321735  714411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1122 00:57:50.330168  714411 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1122 00:57:50.330246  714411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1122 00:57:50.337643  714411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1122 00:57:50.345329  714411 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1122 00:57:50.345397  714411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1122 00:57:50.352720  714411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1122 00:57:50.361194  714411 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1122 00:57:50.361310  714411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1122 00:57:50.374209  714411 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1122 00:57:50.382929  714411 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1122 00:57:50.383002  714411 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1122 00:57:50.390480  714411 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1122 00:57:50.436757  714411 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1122 00:57:50.436819  714411 kubeadm.go:319] [preflight] Running pre-flight checks
	I1122 00:57:50.461701  714411 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1122 00:57:50.461776  714411 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1122 00:57:50.461835  714411 kubeadm.go:319] OS: Linux
	I1122 00:57:50.461885  714411 kubeadm.go:319] CGROUPS_CPU: enabled
	I1122 00:57:50.461939  714411 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1122 00:57:50.461991  714411 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1122 00:57:50.462042  714411 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1122 00:57:50.462094  714411 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1122 00:57:50.462145  714411 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1122 00:57:50.462198  714411 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1122 00:57:50.462249  714411 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1122 00:57:50.462298  714411 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1122 00:57:50.535136  714411 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1122 00:57:50.535342  714411 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1122 00:57:50.535483  714411 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1122 00:57:50.542789  714411 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1122 00:57:50.548106  714411 out.go:252]   - Generating certificates and keys ...
	I1122 00:57:50.548206  714411 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1122 00:57:50.548276  714411 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1122 00:57:50.768906  714411 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1122 00:57:50.975715  714411 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	W1122 00:57:50.281535  710840 pod_ready.go:104] pod "coredns-66bc5c9577-h2kpd" is not "Ready", error: <nil>
	W1122 00:57:52.286081  710840 pod_ready.go:104] pod "coredns-66bc5c9577-h2kpd" is not "Ready", error: <nil>
	I1122 00:57:52.101141  714411 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1122 00:57:52.461866  714411 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1122 00:57:52.728889  714411 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1122 00:57:52.729272  714411 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-882305 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1122 00:57:53.536729  714411 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1122 00:57:53.537077  714411 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-882305 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1122 00:57:55.063363  714411 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1122 00:57:55.445852  714411 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1122 00:57:55.724965  714411 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1122 00:57:55.725216  714411 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1122 00:57:55.971678  714411 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1122 00:57:56.176982  714411 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1122 00:57:56.718212  714411 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1122 00:57:57.227941  714411 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1122 00:57:57.762121  714411 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1122 00:57:57.762843  714411 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1122 00:57:57.765520  714411 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1122 00:57:54.782214  710840 pod_ready.go:104] pod "coredns-66bc5c9577-h2kpd" is not "Ready", error: <nil>
	I1122 00:57:56.784835  710840 pod_ready.go:94] pod "coredns-66bc5c9577-h2kpd" is "Ready"
	I1122 00:57:56.784864  710840 pod_ready.go:86] duration metric: took 38.009231841s for pod "coredns-66bc5c9577-h2kpd" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:57:56.789483  710840 pod_ready.go:83] waiting for pod "etcd-embed-certs-879000" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:57:56.798365  710840 pod_ready.go:94] pod "etcd-embed-certs-879000" is "Ready"
	I1122 00:57:56.798393  710840 pod_ready.go:86] duration metric: took 8.882194ms for pod "etcd-embed-certs-879000" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:57:56.801545  710840 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-879000" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:57:56.809327  710840 pod_ready.go:94] pod "kube-apiserver-embed-certs-879000" is "Ready"
	I1122 00:57:56.809360  710840 pod_ready.go:86] duration metric: took 7.785856ms for pod "kube-apiserver-embed-certs-879000" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:57:56.811870  710840 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-879000" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:57:56.979526  710840 pod_ready.go:94] pod "kube-controller-manager-embed-certs-879000" is "Ready"
	I1122 00:57:56.979556  710840 pod_ready.go:86] duration metric: took 167.647895ms for pod "kube-controller-manager-embed-certs-879000" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:57:57.180650  710840 pod_ready.go:83] waiting for pod "kube-proxy-w9bqj" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:57:57.580685  710840 pod_ready.go:94] pod "kube-proxy-w9bqj" is "Ready"
	I1122 00:57:57.580726  710840 pod_ready.go:86] duration metric: took 400.046535ms for pod "kube-proxy-w9bqj" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:57:57.780308  710840 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-879000" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:57:58.178694  710840 pod_ready.go:94] pod "kube-scheduler-embed-certs-879000" is "Ready"
	I1122 00:57:58.178724  710840 pod_ready.go:86] duration metric: took 398.39348ms for pod "kube-scheduler-embed-certs-879000" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:57:58.178737  710840 pod_ready.go:40] duration metric: took 39.407657542s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:57:58.251619  710840 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1122 00:57:58.254819  710840 out.go:179] * Done! kubectl is now configured to use "embed-certs-879000" cluster and "default" namespace by default
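The pod_ready.go waits above poll labeled kube-system pods until each reports Ready; a rough command-line equivalent wrapped in Go, using the context name from the log (an illustration, not what the test harness actually runs):

// pod_ready_wait_sketch.go — rough stand-in for the pod_ready.go waits above.
package main

import (
	"os"
	"os/exec"
)

func main() {
	// Block until the CoreDNS pods (k8s-app=kube-dns) are Ready, mirroring the
	// first of the label-based waits in the log.
	cmd := exec.Command("kubectl", "--context", "embed-certs-879000",
		"-n", "kube-system", "wait", "--for=condition=Ready",
		"pod", "-l", "k8s-app=kube-dns", "--timeout=120s")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}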
	I1122 00:57:57.768910  714411 out.go:252]   - Booting up control plane ...
	I1122 00:57:57.769026  714411 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1122 00:57:57.769112  714411 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1122 00:57:57.769178  714411 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1122 00:57:57.787663  714411 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1122 00:57:57.787776  714411 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1122 00:57:57.797750  714411 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1122 00:57:57.797905  714411 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1122 00:57:57.797952  714411 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1122 00:57:57.921159  714411 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1122 00:57:57.921282  714411 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1122 00:57:59.420467  714411 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.500722777s
	I1122 00:57:59.424226  714411 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1122 00:57:59.424325  714411 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1122 00:57:59.424420  714411 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1122 00:57:59.424505  714411 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1122 00:58:02.621764  714411 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.196925866s
	I1122 00:58:03.868010  714411 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.443761565s
	I1122 00:58:05.926787  714411 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.502295854s
	I1122 00:58:05.948847  714411 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1122 00:58:05.966901  714411 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1122 00:58:05.984369  714411 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1122 00:58:05.984601  714411 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-882305 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1122 00:58:05.997188  714411 kubeadm.go:319] [bootstrap-token] Using token: gtlx2j.zsn1vl5ysqq6c7xo
	I1122 00:58:06.003111  714411 out.go:252]   - Configuring RBAC rules ...
	I1122 00:58:06.003267  714411 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1122 00:58:06.007885  714411 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1122 00:58:06.018263  714411 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1122 00:58:06.024846  714411 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1122 00:58:06.029517  714411 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1122 00:58:06.040697  714411 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1122 00:58:06.336451  714411 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1122 00:58:06.786479  714411 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1122 00:58:07.335883  714411 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1122 00:58:07.337123  714411 kubeadm.go:319] 
	I1122 00:58:07.337196  714411 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1122 00:58:07.337201  714411 kubeadm.go:319] 
	I1122 00:58:07.337278  714411 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1122 00:58:07.337283  714411 kubeadm.go:319] 
	I1122 00:58:07.337307  714411 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1122 00:58:07.337366  714411 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1122 00:58:07.337416  714411 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1122 00:58:07.337420  714411 kubeadm.go:319] 
	I1122 00:58:07.337474  714411 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1122 00:58:07.337478  714411 kubeadm.go:319] 
	I1122 00:58:07.337525  714411 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1122 00:58:07.337529  714411 kubeadm.go:319] 
	I1122 00:58:07.337581  714411 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1122 00:58:07.337656  714411 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1122 00:58:07.337724  714411 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1122 00:58:07.337729  714411 kubeadm.go:319] 
	I1122 00:58:07.337839  714411 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1122 00:58:07.337926  714411 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1122 00:58:07.337930  714411 kubeadm.go:319] 
	I1122 00:58:07.338014  714411 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token gtlx2j.zsn1vl5ysqq6c7xo \
	I1122 00:58:07.338117  714411 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ecfebb5fda4f065a571cf90106e71e452abce05aaa4d3155b81d7383977d6854 \
	I1122 00:58:07.338138  714411 kubeadm.go:319] 	--control-plane 
	I1122 00:58:07.338141  714411 kubeadm.go:319] 
	I1122 00:58:07.338226  714411 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1122 00:58:07.338236  714411 kubeadm.go:319] 
	I1122 00:58:07.338318  714411 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token gtlx2j.zsn1vl5ysqq6c7xo \
	I1122 00:58:07.338420  714411 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ecfebb5fda4f065a571cf90106e71e452abce05aaa4d3155b81d7383977d6854 
	I1122 00:58:07.342997  714411 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1122 00:58:07.343230  714411 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1122 00:58:07.343336  714411 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1122 00:58:07.343352  714411 cni.go:84] Creating CNI manager for ""
	I1122 00:58:07.343360  714411 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:58:07.346500  714411 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1122 00:58:07.349469  714411 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1122 00:58:07.353392  714411 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1122 00:58:07.353414  714411 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1122 00:58:07.373753  714411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1122 00:58:07.693417  714411 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1122 00:58:07.693548  714411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:58:07.693613  714411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-882305 minikube.k8s.io/updated_at=2025_11_22T00_58_07_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785 minikube.k8s.io/name=default-k8s-diff-port-882305 minikube.k8s.io/primary=true
	I1122 00:58:07.866986  714411 ops.go:34] apiserver oom_adj: -16
	I1122 00:58:07.867095  714411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:58:08.367799  714411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:58:08.867420  714411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:58:09.368107  714411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:58:09.867380  714411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:58:10.367735  714411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:58:10.867603  714411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:58:11.367592  714411 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:58:11.510343  714411 kubeadm.go:1114] duration metric: took 3.816835748s to wait for elevateKubeSystemPrivileges
	I1122 00:58:11.510369  714411 kubeadm.go:403] duration metric: took 21.244985101s to StartCluster
	I1122 00:58:11.510385  714411 settings.go:142] acquiring lock: {Name:mk6c31eb57ec65b047b78b4e1046e03fe7cc77bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:58:11.510452  714411 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1122 00:58:11.511993  714411 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/kubeconfig: {Name:mkcd094d8a765e5a81369c0fb33cb5e696b17bb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:58:11.512228  714411 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:58:11.512456  714411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1122 00:58:11.512847  714411 config.go:182] Loaded profile config "default-k8s-diff-port-882305": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:58:11.512888  714411 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1122 00:58:11.512946  714411 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-882305"
	I1122 00:58:11.512959  714411 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-882305"
	I1122 00:58:11.512981  714411 host.go:66] Checking if "default-k8s-diff-port-882305" exists ...
	I1122 00:58:11.513170  714411 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-882305"
	I1122 00:58:11.513192  714411 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-882305"
	I1122 00:58:11.513578  714411 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-882305 --format={{.State.Status}}
	I1122 00:58:11.514350  714411 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-882305 --format={{.State.Status}}
	I1122 00:58:11.515811  714411 out.go:179] * Verifying Kubernetes components...
	I1122 00:58:11.522450  714411 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:58:11.549101  714411 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1122 00:58:11.552297  714411 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:58:11.552321  714411 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1122 00:58:11.552395  714411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-882305
	I1122 00:58:11.563079  714411 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-882305"
	I1122 00:58:11.563122  714411 host.go:66] Checking if "default-k8s-diff-port-882305" exists ...
	I1122 00:58:11.563569  714411 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-882305 --format={{.State.Status}}
	I1122 00:58:11.611494  714411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33802 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/default-k8s-diff-port-882305/id_rsa Username:docker}
	I1122 00:58:11.622447  714411 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1122 00:58:11.622470  714411 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1122 00:58:11.622533  714411 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-882305
	I1122 00:58:11.656409  714411 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33802 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/default-k8s-diff-port-882305/id_rsa Username:docker}
	I1122 00:58:12.028373  714411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:58:12.102042  714411 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1122 00:58:12.154115  714411 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1122 00:58:12.154208  714411 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:58:13.050128  714411 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.0216815s)
	I1122 00:58:13.051806  714411 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-882305" to be "Ready" ...
	I1122 00:58:13.052113  714411 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1122 00:58:13.108367  714411 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
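Note: the kubeadm join commands in the start log above embed a discovery-token CA certificate hash. Should that hash need to be re-derived later (for example, to hand-build a join command for this cluster), it can be recomputed on the control-plane node from the cluster CA with the standard pipeline documented for kubeadm; the certificate path below assumes the default kubeadm layout used in this run:

  # Recompute the sha256 value used for --discovery-token-ca-cert-hash
  openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'

The result should match the sha256:ecfebb5f... value printed in the join commands above.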
	
	
	==> CRI-O <==
	Nov 22 00:57:44 embed-certs-879000 crio[649]: time="2025-11-22T00:57:44.91517462Z" level=info msg="Removed container c008c2dd7377bbfd4128b50485dd8b601432d034f4890680bab4bcd2fdbea6a4: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hnrr2/dashboard-metrics-scraper" id=b560fe16-6c86-4756-87a1-05a69b32c4e7 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 22 00:57:47 embed-certs-879000 conmon[1133]: conmon 6904d457c7dc728f7026 <ninfo>: container 1138 exited with status 1
	Nov 22 00:57:47 embed-certs-879000 crio[649]: time="2025-11-22T00:57:47.906265428Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=2a794be6-fbca-4ca6-85b8-8f1bcf9fac3f name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:57:47 embed-certs-879000 crio[649]: time="2025-11-22T00:57:47.907579942Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=3850f02e-52de-4971-bc78-31845537df1b name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:57:47 embed-certs-879000 crio[649]: time="2025-11-22T00:57:47.918365218Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=f1eec474-6628-4aa4-bcb1-c582c3e90514 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:57:47 embed-certs-879000 crio[649]: time="2025-11-22T00:57:47.918493214Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:57:47 embed-certs-879000 crio[649]: time="2025-11-22T00:57:47.929651747Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:57:47 embed-certs-879000 crio[649]: time="2025-11-22T00:57:47.929893553Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/0a98b134b440713d299f499d934583186303f5db86d330fad3bdb9bc36d2a9eb/merged/etc/passwd: no such file or directory"
	Nov 22 00:57:47 embed-certs-879000 crio[649]: time="2025-11-22T00:57:47.929922722Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/0a98b134b440713d299f499d934583186303f5db86d330fad3bdb9bc36d2a9eb/merged/etc/group: no such file or directory"
	Nov 22 00:57:47 embed-certs-879000 crio[649]: time="2025-11-22T00:57:47.930252878Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:57:47 embed-certs-879000 crio[649]: time="2025-11-22T00:57:47.947361235Z" level=info msg="Created container 933cca2968517797342b88d5e9db0d039293c75efb66faae70c1f0e8a213eaaa: kube-system/storage-provisioner/storage-provisioner" id=f1eec474-6628-4aa4-bcb1-c582c3e90514 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:57:47 embed-certs-879000 crio[649]: time="2025-11-22T00:57:47.957173346Z" level=info msg="Starting container: 933cca2968517797342b88d5e9db0d039293c75efb66faae70c1f0e8a213eaaa" id=722d56c1-e6b3-4150-bb7f-8d146e9dbc5a name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:57:47 embed-certs-879000 crio[649]: time="2025-11-22T00:57:47.960926057Z" level=info msg="Started container" PID=1641 containerID=933cca2968517797342b88d5e9db0d039293c75efb66faae70c1f0e8a213eaaa description=kube-system/storage-provisioner/storage-provisioner id=722d56c1-e6b3-4150-bb7f-8d146e9dbc5a name=/runtime.v1.RuntimeService/StartContainer sandboxID=2dafa3b42ca588edb8bc1337c892149bde9bed3adf6acae13b7580b69f26771f
	Nov 22 00:57:57 embed-certs-879000 crio[649]: time="2025-11-22T00:57:57.624870235Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:57:57 embed-certs-879000 crio[649]: time="2025-11-22T00:57:57.634145492Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:57:57 embed-certs-879000 crio[649]: time="2025-11-22T00:57:57.634180371Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 22 00:57:57 embed-certs-879000 crio[649]: time="2025-11-22T00:57:57.634208628Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:57:57 embed-certs-879000 crio[649]: time="2025-11-22T00:57:57.642049933Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:57:57 embed-certs-879000 crio[649]: time="2025-11-22T00:57:57.642217124Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 22 00:57:57 embed-certs-879000 crio[649]: time="2025-11-22T00:57:57.642313728Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:57:57 embed-certs-879000 crio[649]: time="2025-11-22T00:57:57.646045574Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:57:57 embed-certs-879000 crio[649]: time="2025-11-22T00:57:57.646187601Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 22 00:57:57 embed-certs-879000 crio[649]: time="2025-11-22T00:57:57.64626375Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:57:57 embed-certs-879000 crio[649]: time="2025-11-22T00:57:57.64943343Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:57:57 embed-certs-879000 crio[649]: time="2025-11-22T00:57:57.649556741Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	933cca2968517       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           27 seconds ago       Running             storage-provisioner         2                   2dafa3b42ca58       storage-provisioner                          kube-system
	93d281eec1d97       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           31 seconds ago       Exited              dashboard-metrics-scraper   2                   862b859352853       dashboard-metrics-scraper-6ffb444bf9-hnrr2   kubernetes-dashboard
	94175cffd15a6       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   43 seconds ago       Running             kubernetes-dashboard        0                   97d0187a82c6a       kubernetes-dashboard-855c9754f9-mrcpd        kubernetes-dashboard
	745bdf157ec74       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           58 seconds ago       Running             busybox                     1                   6d5b8906a7105       busybox                                      default
	08b992da614bc       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           58 seconds ago       Running             coredns                     1                   6794853f05ea5       coredns-66bc5c9577-h2kpd                     kube-system
	f859b19e5db26       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           58 seconds ago       Running             kube-proxy                  1                   552addf39817a       kube-proxy-w9bqj                             kube-system
	6904d457c7dc7       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           58 seconds ago       Exited              storage-provisioner         1                   2dafa3b42ca58       storage-provisioner                          kube-system
	9fd53bbae898c       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           58 seconds ago       Running             kindnet-cni                 1                   a68abed2f8733       kindnet-j8wwg                                kube-system
	9852dc8e953e4       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   3a2c8dcdc139f       etcd-embed-certs-879000                      kube-system
	f660ef303bd46       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   17eb5f2423265       kube-controller-manager-embed-certs-879000   kube-system
	d2057db699cba       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   e502162e9c4c6       kube-apiserver-embed-certs-879000            kube-system
	53b12e2f48bad       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   86d5a1ed196f4       kube-scheduler-embed-certs-879000            kube-system
	
	
	==> coredns [08b992da614bc2e772d094ded50c154806974fe8fb54eb1da406e962496e84d6] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55252 - 62034 "HINFO IN 6746390676220224766.2530450903095737698. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.031624797s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
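The CoreDNS instance above repeatedly fails to list Services, Namespaces, and EndpointSlices because dials to the in-cluster API address 10.96.0.1:443 time out during the restart window. A generic way to see what that ClusterIP should be routing to (ordinary kubectl queries, not commands recorded by the test run) is:

  # The kubernetes Service that owns 10.96.0.1, and the apiserver endpoints behind it
  kubectl -n default get svc kubernetes -o wide
  kubectl -n default get endpointslices -l kubernetes.io/service-name=kubernetes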
	
	
	==> describe nodes <==
	Name:               embed-certs-879000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-879000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=embed-certs-879000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T00_55_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:55:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-879000
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:58:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:58:08 +0000   Sat, 22 Nov 2025 00:55:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:58:08 +0000   Sat, 22 Nov 2025 00:55:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:58:08 +0000   Sat, 22 Nov 2025 00:55:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 22 Nov 2025 00:58:08 +0000   Sat, 22 Nov 2025 00:56:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-879000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c92eea4d3c03c0156e01fcf8691e3907
	  System UUID:                37f1296a-29c2-4a0f-8fef-fc1d195b0150
	  Boot ID:                    72ac7385-472f-47d1-a23e-bd80468d6e09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 coredns-66bc5c9577-h2kpd                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m22s
	  kube-system                 etcd-embed-certs-879000                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m27s
	  kube-system                 kindnet-j8wwg                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m22s
	  kube-system                 kube-apiserver-embed-certs-879000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-controller-manager-embed-certs-879000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 kube-proxy-w9bqj                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-scheduler-embed-certs-879000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-hnrr2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-mrcpd         0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m20s                  kube-proxy       
	  Normal   Starting                 57s                    kube-proxy       
	  Normal   Starting                 2m39s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m39s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m39s (x8 over 2m39s)  kubelet          Node embed-certs-879000 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m39s (x8 over 2m39s)  kubelet          Node embed-certs-879000 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m39s (x8 over 2m39s)  kubelet          Node embed-certs-879000 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m26s                  kubelet          Node embed-certs-879000 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m26s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m26s                  kubelet          Node embed-certs-879000 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m26s                  kubelet          Node embed-certs-879000 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m26s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m23s                  node-controller  Node embed-certs-879000 event: Registered Node embed-certs-879000 in Controller
	  Normal   NodeReady                101s                   kubelet          Node embed-certs-879000 status is now: NodeReady
	  Normal   Starting                 66s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 66s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  66s (x8 over 66s)      kubelet          Node embed-certs-879000 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    66s (x8 over 66s)      kubelet          Node embed-certs-879000 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     66s (x8 over 66s)      kubelet          Node embed-certs-879000 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s                    node-controller  Node embed-certs-879000 event: Registered Node embed-certs-879000 in Controller
	
	
	==> dmesg <==
	[Nov22 00:35] overlayfs: idmapped layers are currently not supported
	[Nov22 00:36] overlayfs: idmapped layers are currently not supported
	[ +18.168104] overlayfs: idmapped layers are currently not supported
	[Nov22 00:37] overlayfs: idmapped layers are currently not supported
	[ +56.322609] overlayfs: idmapped layers are currently not supported
	[Nov22 00:38] overlayfs: idmapped layers are currently not supported
	[Nov22 00:39] overlayfs: idmapped layers are currently not supported
	[ +23.174928] overlayfs: idmapped layers are currently not supported
	[Nov22 00:41] overlayfs: idmapped layers are currently not supported
	[Nov22 00:42] overlayfs: idmapped layers are currently not supported
	[Nov22 00:44] overlayfs: idmapped layers are currently not supported
	[Nov22 00:45] overlayfs: idmapped layers are currently not supported
	[Nov22 00:46] overlayfs: idmapped layers are currently not supported
	[Nov22 00:48] overlayfs: idmapped layers are currently not supported
	[Nov22 00:50] overlayfs: idmapped layers are currently not supported
	[Nov22 00:51] overlayfs: idmapped layers are currently not supported
	[ +11.900293] overlayfs: idmapped layers are currently not supported
	[ +28.922055] overlayfs: idmapped layers are currently not supported
	[Nov22 00:52] overlayfs: idmapped layers are currently not supported
	[Nov22 00:53] overlayfs: idmapped layers are currently not supported
	[Nov22 00:54] overlayfs: idmapped layers are currently not supported
	[Nov22 00:55] overlayfs: idmapped layers are currently not supported
	[Nov22 00:56] overlayfs: idmapped layers are currently not supported
	[Nov22 00:57] overlayfs: idmapped layers are currently not supported
	[Nov22 00:58] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [9852dc8e953e4082535d9da09fe8c7c488642b2923223eea6484d9282094e3ea] <==
	{"level":"warn","ts":"2025-11-22T00:57:14.304034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:57:14.345714Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:57:14.355069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:57:14.373007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:57:14.386613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:57:14.410756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:57:14.427155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:57:14.446454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:57:14.473086Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:57:14.501508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:57:14.519917Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:57:14.567472Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:57:14.569705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:57:14.580493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:57:14.605225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:57:14.624813Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:57:14.641009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:57:14.659786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:57:14.676654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:57:14.694298Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:57:14.750798Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:57:14.776595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:57:14.823767Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:57:14.856529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:57:14.933021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38118","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 00:58:16 up  5:40,  0 user,  load average: 3.53, 3.82, 2.94
	Linux embed-certs-879000 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9fd53bbae898cfbeaff1f5002c8476e6762d7ebef17deeac9f498600ae2a7b1b] <==
	I1122 00:57:17.448640       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1122 00:57:17.449057       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1122 00:57:17.449249       1 main.go:148] setting mtu 1500 for CNI 
	I1122 00:57:17.449303       1 main.go:178] kindnetd IP family: "ipv4"
	I1122 00:57:17.449337       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-22T00:57:17Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1122 00:57:17.624268       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1122 00:57:17.643238       1 controller.go:381] "Waiting for informer caches to sync"
	I1122 00:57:17.643278       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1122 00:57:17.644080       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1122 00:57:47.624569       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1122 00:57:47.644195       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1122 00:57:47.644309       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1122 00:57:47.644392       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1122 00:57:48.744280       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1122 00:57:48.744322       1 metrics.go:72] Registering metrics
	I1122 00:57:48.744383       1 controller.go:711] "Syncing nftables rules"
	I1122 00:57:57.624568       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1122 00:57:57.624607       1 main.go:301] handling current node
	I1122 00:58:07.630555       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1122 00:58:07.630656       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d2057db699cba9b5fd5582afa88f5011c61af54cfbf9b6be282bae14ccb3e06b] <==
	I1122 00:57:16.641644       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1122 00:57:16.656692       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1122 00:57:16.668839       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1122 00:57:16.668881       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1122 00:57:16.705884       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1122 00:57:16.709314       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1122 00:57:16.709340       1 policy_source.go:240] refreshing policies
	I1122 00:57:16.739151       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1122 00:57:16.745722       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1122 00:57:16.747896       1 cache.go:39] Caches are synced for autoregister controller
	I1122 00:57:16.748135       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1122 00:57:16.762518       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1122 00:57:16.779771       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1122 00:57:16.843870       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	E1122 00:57:16.863877       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1122 00:57:16.942170       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1122 00:57:17.484970       1 controller.go:667] quota admission added evaluator for: namespaces
	I1122 00:57:17.786616       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1122 00:57:17.920159       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1122 00:57:17.949221       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1122 00:57:18.180412       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.98.159"}
	I1122 00:57:18.215563       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.24.172"}
	I1122 00:57:20.545936       1 controller.go:667] quota admission added evaluator for: endpoints
	I1122 00:57:20.847254       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1122 00:57:21.060665       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [f660ef303bd4694fbdad76a7eb87133a3cca27093a6685ac673521dce9c9d434] <==
	I1122 00:57:20.559800       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:57:20.559827       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1122 00:57:20.559854       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1122 00:57:20.568738       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:57:20.570504       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1122 00:57:20.570567       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1122 00:57:20.570595       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1122 00:57:20.570607       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1122 00:57:20.570613       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1122 00:57:20.573678       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1122 00:57:20.575914       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1122 00:57:20.577051       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1122 00:57:20.578163       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1122 00:57:20.579385       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1122 00:57:20.582901       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1122 00:57:20.582946       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1122 00:57:20.588954       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1122 00:57:20.589034       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1122 00:57:20.588965       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1122 00:57:20.588981       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1122 00:57:20.588994       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1122 00:57:20.589015       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1122 00:57:20.589007       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1122 00:57:20.589025       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1122 00:57:20.599028       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [f859b19e5db26f0386393c8d593ce69ad672e813e636cf787d88dc587b72d3be] <==
	I1122 00:57:18.175287       1 server_linux.go:53] "Using iptables proxy"
	I1122 00:57:18.290478       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1122 00:57:18.391684       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1122 00:57:18.391791       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1122 00:57:18.391914       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1122 00:57:18.421392       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1122 00:57:18.421506       1 server_linux.go:132] "Using iptables Proxier"
	I1122 00:57:18.425216       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1122 00:57:18.425582       1 server.go:527] "Version info" version="v1.34.1"
	I1122 00:57:18.425790       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:57:18.427139       1 config.go:200] "Starting service config controller"
	I1122 00:57:18.427351       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1122 00:57:18.427415       1 config.go:106] "Starting endpoint slice config controller"
	I1122 00:57:18.427466       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1122 00:57:18.427516       1 config.go:403] "Starting serviceCIDR config controller"
	I1122 00:57:18.427543       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1122 00:57:18.428412       1 config.go:309] "Starting node config controller"
	I1122 00:57:18.428463       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1122 00:57:18.428492       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1122 00:57:18.527665       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1122 00:57:18.527664       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1122 00:57:18.527684       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [53b12e2f48badfbf5f25cd651f43e00f0d1451191aa045dab44e2461293c766c] <==
	I1122 00:57:14.064884       1 serving.go:386] Generated self-signed cert in-memory
	I1122 00:57:17.415074       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1122 00:57:17.415102       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:57:17.470617       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1122 00:57:17.476216       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1122 00:57:17.476258       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1122 00:57:17.476751       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1122 00:57:17.476272       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:57:17.497258       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:57:17.476282       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1122 00:57:17.497415       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1122 00:57:17.579514       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1122 00:57:17.597938       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:57:17.598757       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 22 00:57:21 embed-certs-879000 kubelet[782]: I1122 00:57:21.146664     782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qmgl\" (UniqueName: \"kubernetes.io/projected/10d444b4-3695-440b-8e1b-8ddb92023d36-kube-api-access-2qmgl\") pod \"kubernetes-dashboard-855c9754f9-mrcpd\" (UID: \"10d444b4-3695-440b-8e1b-8ddb92023d36\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-mrcpd"
	Nov 22 00:57:21 embed-certs-879000 kubelet[782]: I1122 00:57:21.146687     782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/10d444b4-3695-440b-8e1b-8ddb92023d36-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-mrcpd\" (UID: \"10d444b4-3695-440b-8e1b-8ddb92023d36\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-mrcpd"
	Nov 22 00:57:21 embed-certs-879000 kubelet[782]: W1122 00:57:21.388577     782 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/a6fb6b81dce5b3616a7bbeb7273662d346745bb8e0cc70a843d43e0a09366fd0/crio-862b85935285388072e7bd0d6ae6fea17ac17f3a3c07a13f09c4f3675198fd7d WatchSource:0}: Error finding container 862b85935285388072e7bd0d6ae6fea17ac17f3a3c07a13f09c4f3675198fd7d: Status 404 returned error can't find the container with id 862b85935285388072e7bd0d6ae6fea17ac17f3a3c07a13f09c4f3675198fd7d
	Nov 22 00:57:26 embed-certs-879000 kubelet[782]: I1122 00:57:26.455796     782 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 22 00:57:26 embed-certs-879000 kubelet[782]: I1122 00:57:26.814125     782 scope.go:117] "RemoveContainer" containerID="3002581a3fd9e6cde518227d2932ffb657ea413eb41ea3e05f30be06fe5d1d25"
	Nov 22 00:57:27 embed-certs-879000 kubelet[782]: I1122 00:57:27.820204     782 scope.go:117] "RemoveContainer" containerID="c008c2dd7377bbfd4128b50485dd8b601432d034f4890680bab4bcd2fdbea6a4"
	Nov 22 00:57:27 embed-certs-879000 kubelet[782]: E1122 00:57:27.820363     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-hnrr2_kubernetes-dashboard(28d67df6-e61b-4eae-9947-24db9e122425)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hnrr2" podUID="28d67df6-e61b-4eae-9947-24db9e122425"
	Nov 22 00:57:27 embed-certs-879000 kubelet[782]: I1122 00:57:27.822789     782 scope.go:117] "RemoveContainer" containerID="3002581a3fd9e6cde518227d2932ffb657ea413eb41ea3e05f30be06fe5d1d25"
	Nov 22 00:57:28 embed-certs-879000 kubelet[782]: I1122 00:57:28.831714     782 scope.go:117] "RemoveContainer" containerID="c008c2dd7377bbfd4128b50485dd8b601432d034f4890680bab4bcd2fdbea6a4"
	Nov 22 00:57:28 embed-certs-879000 kubelet[782]: E1122 00:57:28.832094     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-hnrr2_kubernetes-dashboard(28d67df6-e61b-4eae-9947-24db9e122425)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hnrr2" podUID="28d67df6-e61b-4eae-9947-24db9e122425"
	Nov 22 00:57:31 embed-certs-879000 kubelet[782]: I1122 00:57:31.350401     782 scope.go:117] "RemoveContainer" containerID="c008c2dd7377bbfd4128b50485dd8b601432d034f4890680bab4bcd2fdbea6a4"
	Nov 22 00:57:31 embed-certs-879000 kubelet[782]: E1122 00:57:31.350621     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-hnrr2_kubernetes-dashboard(28d67df6-e61b-4eae-9947-24db9e122425)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hnrr2" podUID="28d67df6-e61b-4eae-9947-24db9e122425"
	Nov 22 00:57:44 embed-certs-879000 kubelet[782]: I1122 00:57:44.670872     782 scope.go:117] "RemoveContainer" containerID="c008c2dd7377bbfd4128b50485dd8b601432d034f4890680bab4bcd2fdbea6a4"
	Nov 22 00:57:44 embed-certs-879000 kubelet[782]: I1122 00:57:44.894381     782 scope.go:117] "RemoveContainer" containerID="c008c2dd7377bbfd4128b50485dd8b601432d034f4890680bab4bcd2fdbea6a4"
	Nov 22 00:57:44 embed-certs-879000 kubelet[782]: I1122 00:57:44.894623     782 scope.go:117] "RemoveContainer" containerID="93d281eec1d97f6f14ff89771acbf42bcdfcc26819b8af93e1dd9a6f16af4fd6"
	Nov 22 00:57:44 embed-certs-879000 kubelet[782]: E1122 00:57:44.894775     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-hnrr2_kubernetes-dashboard(28d67df6-e61b-4eae-9947-24db9e122425)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hnrr2" podUID="28d67df6-e61b-4eae-9947-24db9e122425"
	Nov 22 00:57:44 embed-certs-879000 kubelet[782]: I1122 00:57:44.908739     782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-mrcpd" podStartSLOduration=12.507392407 podStartE2EDuration="23.908718201s" podCreationTimestamp="2025-11-22 00:57:21 +0000 UTC" firstStartedPulling="2025-11-22 00:57:21.405329163 +0000 UTC m=+10.998230598" lastFinishedPulling="2025-11-22 00:57:32.806654965 +0000 UTC m=+22.399556392" observedRunningTime="2025-11-22 00:57:33.877751942 +0000 UTC m=+23.470653394" watchObservedRunningTime="2025-11-22 00:57:44.908718201 +0000 UTC m=+34.501619644"
	Nov 22 00:57:47 embed-certs-879000 kubelet[782]: I1122 00:57:47.905607     782 scope.go:117] "RemoveContainer" containerID="6904d457c7dc728f7026d679dcbeeb784ce896011c4ea8efb2ad461a00099705"
	Nov 22 00:57:51 embed-certs-879000 kubelet[782]: I1122 00:57:51.350447     782 scope.go:117] "RemoveContainer" containerID="93d281eec1d97f6f14ff89771acbf42bcdfcc26819b8af93e1dd9a6f16af4fd6"
	Nov 22 00:57:51 embed-certs-879000 kubelet[782]: E1122 00:57:51.350721     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-hnrr2_kubernetes-dashboard(28d67df6-e61b-4eae-9947-24db9e122425)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hnrr2" podUID="28d67df6-e61b-4eae-9947-24db9e122425"
	Nov 22 00:58:03 embed-certs-879000 kubelet[782]: I1122 00:58:03.669745     782 scope.go:117] "RemoveContainer" containerID="93d281eec1d97f6f14ff89771acbf42bcdfcc26819b8af93e1dd9a6f16af4fd6"
	Nov 22 00:58:03 embed-certs-879000 kubelet[782]: E1122 00:58:03.670458     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-hnrr2_kubernetes-dashboard(28d67df6-e61b-4eae-9947-24db9e122425)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-hnrr2" podUID="28d67df6-e61b-4eae-9947-24db9e122425"
	Nov 22 00:58:10 embed-certs-879000 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 22 00:58:10 embed-certs-879000 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 22 00:58:10 embed-certs-879000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [94175cffd15a6a8cfd48697de8133883ffd4078df2399b4e302dafcd31ccd293] <==
	2025/11/22 00:57:32 Starting overwatch
	2025/11/22 00:57:32 Using namespace: kubernetes-dashboard
	2025/11/22 00:57:32 Using in-cluster config to connect to apiserver
	2025/11/22 00:57:32 Using secret token for csrf signing
	2025/11/22 00:57:32 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/22 00:57:32 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/22 00:57:32 Successful initial request to the apiserver, version: v1.34.1
	2025/11/22 00:57:32 Generating JWE encryption key
	2025/11/22 00:57:32 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/22 00:57:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/22 00:57:33 Initializing JWE encryption key from synchronized object
	2025/11/22 00:57:33 Creating in-cluster Sidecar client
	2025/11/22 00:57:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/22 00:57:33 Serving insecurely on HTTP port: 9090
	2025/11/22 00:58:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [6904d457c7dc728f7026d679dcbeeb784ce896011c4ea8efb2ad461a00099705] <==
	I1122 00:57:17.764166       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1122 00:57:47.771312       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [933cca2968517797342b88d5e9db0d039293c75efb66faae70c1f0e8a213eaaa] <==
	I1122 00:57:48.008397       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1122 00:57:48.008550       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1122 00:57:48.015426       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:57:51.471559       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:57:55.731953       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:57:59.329933       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:58:02.383918       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:58:05.405893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:58:05.411064       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1122 00:58:05.411292       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1122 00:58:05.411501       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-879000_14ebbff0-0f1c-4acd-8a13-d03de6ddee56!
	I1122 00:58:05.412440       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2574480b-6478-427e-83f2-2c518ace1325", APIVersion:"v1", ResourceVersion:"679", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-879000_14ebbff0-0f1c-4acd-8a13-d03de6ddee56 became leader
	W1122 00:58:05.420998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:58:05.429200       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1122 00:58:05.511731       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-879000_14ebbff0-0f1c-4acd-8a13-d03de6ddee56!
	W1122 00:58:07.432805       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:58:07.441515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:58:09.452276       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:58:09.458631       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:58:11.471478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:58:11.490044       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:58:13.493153       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:58:13.497718       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:58:15.501041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:58:15.507555       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
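For context on the storage-provisioner log above: the repeated "v1 Endpoints is deprecated" warnings are emitted by its leader election, which still takes an Endpoints-based lock named k8s.io-minikube-hostpath in kube-system (the same object the LeaderElection event references). A hedged way to inspect that lock from the test host while the profile exists; the context name is taken from this run, and control-plane.alpha.kubernetes.io/leader is the conventional client-go lock-record annotation, which may vary by client version:

	# Dump the Endpoints object the provisioner uses as its leader-election lock
	kubectl --context embed-certs-879000 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	# Extract just the current holder from the leader-election annotation (assumed key)
	kubectl --context embed-certs-879000 -n kube-system get endpoints k8s.io-minikube-hostpath \
	  -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'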
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-879000 -n embed-certs-879000
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-879000 -n embed-certs-879000: exit status 2 (372.182817ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-879000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.93s)
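A side note on the status probe above (exit status 2 while the apiserver field reads "Running"): --format takes a Go template over minikube's status struct, so one invocation can report all components while the exit code still encodes any degraded one. A minimal sketch against this profile, assuming the standard Host/Kubelet/APIServer/Kubeconfig field names:

	# Single-line status summary; a non-zero exit code still flags the degraded component
	out/minikube-linux-arm64 status -p embed-certs-879000 \
	  --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'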

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.93s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-882305 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-882305 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (275.488793ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:58:39Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-882305 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-882305 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-882305 describe deploy/metrics-server -n kube-system: exit status 1 (99.006416ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-882305 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
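The exit status 11 above is not a metrics-server problem: the addon enable aborts in minikube's paused-state check, which, per the error text, runs sudo runc list -f json inside the node and fails because /run/runc does not exist. A hedged sketch of reproducing that check by hand on this profile (/run/runc is runc's default state root; the CRI-O runtime in the kicbase node does not necessarily populate it):

	# Re-run the exact command the paused-state check executes inside the node
	out/minikube-linux-arm64 ssh -p default-k8s-diff-port-882305 -- sudo runc list -f json
	# Check whether the default runc state root exists in the node at all
	out/minikube-linux-arm64 ssh -p default-k8s-diff-port-882305 -- sudo ls -la /run/runc
	# The workload containers remain visible through the CRI regardless of the runc state root
	out/minikube-linux-arm64 ssh -p default-k8s-diff-port-882305 -- sudo crictl ps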
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-882305
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-882305:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3f972239d661c1a7b13b144967a8f04cbcdb72c736bec80a78bae8fab1d357e1",
	        "Created": "2025-11-22T00:57:41.715477223Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 714799,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-22T00:57:41.79205986Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:97061622515a85470dad13cb046ad5fc5021fcd2b05aa921a620abb52951cd5d",
	        "ResolvConfPath": "/var/lib/docker/containers/3f972239d661c1a7b13b144967a8f04cbcdb72c736bec80a78bae8fab1d357e1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3f972239d661c1a7b13b144967a8f04cbcdb72c736bec80a78bae8fab1d357e1/hostname",
	        "HostsPath": "/var/lib/docker/containers/3f972239d661c1a7b13b144967a8f04cbcdb72c736bec80a78bae8fab1d357e1/hosts",
	        "LogPath": "/var/lib/docker/containers/3f972239d661c1a7b13b144967a8f04cbcdb72c736bec80a78bae8fab1d357e1/3f972239d661c1a7b13b144967a8f04cbcdb72c736bec80a78bae8fab1d357e1-json.log",
	        "Name": "/default-k8s-diff-port-882305",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-882305:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-882305",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3f972239d661c1a7b13b144967a8f04cbcdb72c736bec80a78bae8fab1d357e1",
	                "LowerDir": "/var/lib/docker/overlay2/2a6ef4a6700f6fb0e1d43ffbcadcf526e3d5c7ff5a78ad3ae005fd03563625b2-init/diff:/var/lib/docker/overlay2/7e8788c6de692bc1c3758a2bb2c4b8da0fbba26855f855c0f3b655bfbdd92f8e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2a6ef4a6700f6fb0e1d43ffbcadcf526e3d5c7ff5a78ad3ae005fd03563625b2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2a6ef4a6700f6fb0e1d43ffbcadcf526e3d5c7ff5a78ad3ae005fd03563625b2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2a6ef4a6700f6fb0e1d43ffbcadcf526e3d5c7ff5a78ad3ae005fd03563625b2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-882305",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-882305/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-882305",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-882305",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-882305",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6e59ce700f08e1114af56e2d244d932d672f0a90f760aec3bba0d1ce665a7509",
	            "SandboxKey": "/var/run/docker/netns/6e59ce700f08",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33802"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33803"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33806"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33804"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33805"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-882305": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "86:86:d8:84:45:d8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b345e3fe787228de3ab90525c1947dc1357720a8a249cb6a46c68e40ecbfe59b",
	                    "EndpointID": "89774dd98bb373a342644f493c38f0468175228629cce1c25b8aa6de95aa9f56",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-882305",
	                        "3f972239d661"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
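One detail worth pulling out of the inspect dump above: the API server's container port 8444/tcp is published on a dynamically assigned localhost port (33805 in this run). A hedged one-liner to recover that mapping without parsing the JSON:

	# Print the host address backing the container's 8444/tcp binding (e.g. 127.0.0.1:33805)
	docker port default-k8s-diff-port-882305 8444/tcp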
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-882305 -n default-k8s-diff-port-882305
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-882305 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-882305 logs -n 25: (1.436142101s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p old-k8s-version-625837                                                                                                                                                                                                                     │ old-k8s-version-625837       │ jenkins │ v1.37.0 │ 22 Nov 25 00:54 UTC │ 22 Nov 25 00:54 UTC │
	│ delete  │ -p old-k8s-version-625837                                                                                                                                                                                                                     │ old-k8s-version-625837       │ jenkins │ v1.37.0 │ 22 Nov 25 00:54 UTC │ 22 Nov 25 00:54 UTC │
	│ start   │ -p no-preload-165130 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-165130            │ jenkins │ v1.37.0 │ 22 Nov 25 00:54 UTC │ 22 Nov 25 00:56 UTC │
	│ delete  │ -p cert-expiration-621390                                                                                                                                                                                                                     │ cert-expiration-621390       │ jenkins │ v1.37.0 │ 22 Nov 25 00:55 UTC │ 22 Nov 25 00:55 UTC │
	│ start   │ -p embed-certs-879000 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-879000           │ jenkins │ v1.37.0 │ 22 Nov 25 00:55 UTC │ 22 Nov 25 00:56 UTC │
	│ addons  │ enable metrics-server -p no-preload-165130 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-165130            │ jenkins │ v1.37.0 │ 22 Nov 25 00:56 UTC │                     │
	│ stop    │ -p no-preload-165130 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-165130            │ jenkins │ v1.37.0 │ 22 Nov 25 00:56 UTC │ 22 Nov 25 00:56 UTC │
	│ addons  │ enable dashboard -p no-preload-165130 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-165130            │ jenkins │ v1.37.0 │ 22 Nov 25 00:56 UTC │ 22 Nov 25 00:56 UTC │
	│ start   │ -p no-preload-165130 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-165130            │ jenkins │ v1.37.0 │ 22 Nov 25 00:56 UTC │ 22 Nov 25 00:57 UTC │
	│ addons  │ enable metrics-server -p embed-certs-879000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-879000           │ jenkins │ v1.37.0 │ 22 Nov 25 00:56 UTC │                     │
	│ stop    │ -p embed-certs-879000 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-879000           │ jenkins │ v1.37.0 │ 22 Nov 25 00:56 UTC │ 22 Nov 25 00:57 UTC │
	│ addons  │ enable dashboard -p embed-certs-879000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-879000           │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │ 22 Nov 25 00:57 UTC │
	│ start   │ -p embed-certs-879000 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-879000           │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │ 22 Nov 25 00:57 UTC │
	│ image   │ no-preload-165130 image list --format=json                                                                                                                                                                                                    │ no-preload-165130            │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │ 22 Nov 25 00:57 UTC │
	│ pause   │ -p no-preload-165130 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-165130            │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │                     │
	│ delete  │ -p no-preload-165130                                                                                                                                                                                                                          │ no-preload-165130            │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │ 22 Nov 25 00:57 UTC │
	│ delete  │ -p no-preload-165130                                                                                                                                                                                                                          │ no-preload-165130            │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │ 22 Nov 25 00:57 UTC │
	│ delete  │ -p disable-driver-mounts-046489                                                                                                                                                                                                               │ disable-driver-mounts-046489 │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │ 22 Nov 25 00:57 UTC │
	│ start   │ -p default-k8s-diff-port-882305 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-882305 │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │ 22 Nov 25 00:58 UTC │
	│ image   │ embed-certs-879000 image list --format=json                                                                                                                                                                                                   │ embed-certs-879000           │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │ 22 Nov 25 00:58 UTC │
	│ pause   │ -p embed-certs-879000 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-879000           │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │                     │
	│ delete  │ -p embed-certs-879000                                                                                                                                                                                                                         │ embed-certs-879000           │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │ 22 Nov 25 00:58 UTC │
	│ delete  │ -p embed-certs-879000                                                                                                                                                                                                                         │ embed-certs-879000           │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │ 22 Nov 25 00:58 UTC │
	│ start   │ -p newest-cni-683181 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-683181            │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-882305 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-882305 │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 00:58:20
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1122 00:58:20.016196  718325 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:58:20.016341  718325 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:58:20.016356  718325 out.go:374] Setting ErrFile to fd 2...
	I1122 00:58:20.016362  718325 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:58:20.016621  718325 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1122 00:58:20.017096  718325 out.go:368] Setting JSON to false
	I1122 00:58:20.018127  718325 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":20416,"bootTime":1763752684,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1122 00:58:20.018210  718325 start.go:143] virtualization:  
	I1122 00:58:20.022137  718325 out.go:179] * [newest-cni-683181] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1122 00:58:20.026388  718325 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:58:20.026466  718325 notify.go:221] Checking for updates...
	I1122 00:58:20.032987  718325 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:58:20.036020  718325 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1122 00:58:20.039144  718325 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-513600/.minikube
	I1122 00:58:20.042275  718325 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1122 00:58:20.045378  718325 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:58:20.049209  718325 config.go:182] Loaded profile config "default-k8s-diff-port-882305": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:58:20.049390  718325 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:58:20.072089  718325 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1122 00:58:20.072224  718325 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:58:20.136291  718325 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-22 00:58:20.126065138 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:58:20.136399  718325 docker.go:319] overlay module found
	I1122 00:58:20.139668  718325 out.go:179] * Using the docker driver based on user configuration
	I1122 00:58:20.142580  718325 start.go:309] selected driver: docker
	I1122 00:58:20.142604  718325 start.go:930] validating driver "docker" against <nil>
	I1122 00:58:20.142633  718325 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:58:20.143390  718325 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:58:20.207731  718325 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-22 00:58:20.19830282 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:58:20.207901  718325 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1122 00:58:20.207926  718325 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1122 00:58:20.208144  718325 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1122 00:58:20.211093  718325 out.go:179] * Using Docker driver with root privileges
	I1122 00:58:20.213967  718325 cni.go:84] Creating CNI manager for ""
	I1122 00:58:20.214048  718325 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:58:20.214063  718325 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1122 00:58:20.214155  718325 start.go:353] cluster config:
	{Name:newest-cni-683181 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-683181 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:58:20.219151  718325 out.go:179] * Starting "newest-cni-683181" primary control-plane node in "newest-cni-683181" cluster
	I1122 00:58:20.222164  718325 cache.go:134] Beginning downloading kic base image for docker with crio
	I1122 00:58:20.225190  718325 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:58:20.228264  718325 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:58:20.228312  718325 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1122 00:58:20.228322  718325 cache.go:65] Caching tarball of preloaded images
	I1122 00:58:20.228334  718325 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:58:20.228436  718325 preload.go:238] Found /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1122 00:58:20.228447  718325 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1122 00:58:20.228581  718325 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/newest-cni-683181/config.json ...
	I1122 00:58:20.228608  718325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/newest-cni-683181/config.json: {Name:mkee66e4af3184b4059ec484aba2c7b75cd3be34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:58:20.249431  718325 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:58:20.249458  718325 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:58:20.249477  718325 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:58:20.249501  718325 start.go:360] acquireMachinesLock for newest-cni-683181: {Name:mk27a4458a1236fbb3e5921a2f9459ba81f48a3a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:58:20.249626  718325 start.go:364] duration metric: took 105.703µs to acquireMachinesLock for "newest-cni-683181"
	I1122 00:58:20.249665  718325 start.go:93] Provisioning new machine with config: &{Name:newest-cni-683181 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-683181 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:58:20.249866  718325 start.go:125] createHost starting for "" (driver="docker")
	W1122 00:58:17.556589  714411 node_ready.go:57] node "default-k8s-diff-port-882305" has "Ready":"False" status (will retry)
	W1122 00:58:20.056066  714411 node_ready.go:57] node "default-k8s-diff-port-882305" has "Ready":"False" status (will retry)
	I1122 00:58:20.253464  718325 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1122 00:58:20.253712  718325 start.go:159] libmachine.API.Create for "newest-cni-683181" (driver="docker")
	I1122 00:58:20.253748  718325 client.go:173] LocalClient.Create starting
	I1122 00:58:20.253857  718325 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem
	I1122 00:58:20.253897  718325 main.go:143] libmachine: Decoding PEM data...
	I1122 00:58:20.253913  718325 main.go:143] libmachine: Parsing certificate...
	I1122 00:58:20.253970  718325 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem
	I1122 00:58:20.253996  718325 main.go:143] libmachine: Decoding PEM data...
	I1122 00:58:20.254009  718325 main.go:143] libmachine: Parsing certificate...
	I1122 00:58:20.254379  718325 cli_runner.go:164] Run: docker network inspect newest-cni-683181 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1122 00:58:20.271598  718325 cli_runner.go:211] docker network inspect newest-cni-683181 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1122 00:58:20.271700  718325 network_create.go:284] running [docker network inspect newest-cni-683181] to gather additional debugging logs...
	I1122 00:58:20.271721  718325 cli_runner.go:164] Run: docker network inspect newest-cni-683181
	W1122 00:58:20.289384  718325 cli_runner.go:211] docker network inspect newest-cni-683181 returned with exit code 1
	I1122 00:58:20.289542  718325 network_create.go:287] error running [docker network inspect newest-cni-683181]: docker network inspect newest-cni-683181: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-683181 not found
	I1122 00:58:20.289566  718325 network_create.go:289] output of [docker network inspect newest-cni-683181]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-683181 not found
	
	** /stderr **
	I1122 00:58:20.289677  718325 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:58:20.308814  718325 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b16c782e3da8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ba:82:00:9d:45:d0} reservation:<nil>}
	I1122 00:58:20.309221  718325 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-13c9c00b5de5 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:7a:4e:a4:3d:42:9e} reservation:<nil>}
	I1122 00:58:20.309711  718325 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3c074a6aa87b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9a:1f:77:e5:90:0b} reservation:<nil>}
	I1122 00:58:20.310260  718325 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a075c0}
	I1122 00:58:20.310300  718325 network_create.go:124] attempt to create docker network newest-cni-683181 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1122 00:58:20.310355  718325 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-683181 newest-cni-683181
	I1122 00:58:20.368329  718325 network_create.go:108] docker network newest-cni-683181 192.168.76.0/24 created
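	(Editor's note: the subnet scan above walks candidate private /24 blocks, skips any already owned by an existing bridge, and uses the first free one, 192.168.76.0/24 here. The following is only a rough Go sketch of that selection idea; the function name and the fixed step of 9 are illustrative, and minikube's real network.go also inspects host interfaces and records a reservation.)

	package main

	import (
		"fmt"
		"net"
	)

	// firstFreeSubnet returns the first candidate /24 that does not collide
	// with an already-taken CIDR. Sketch only, not minikube's implementation.
	func firstFreeSubnet(taken []string) (*net.IPNet, error) {
		used := make([]*net.IPNet, 0, len(taken))
		for _, c := range taken {
			_, n, err := net.ParseCIDR(c)
			if err != nil {
				return nil, err
			}
			used = append(used, n)
		}
		// Step the third octet by 9, mirroring 49 -> 58 -> 67 -> 76 in the log.
		for third := 49; third < 256; third += 9 {
			_, cand, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", third))
			free := true
			for _, u := range used {
				if u.Contains(cand.IP) || cand.Contains(u.IP) {
					free = false
					break
				}
			}
			if free {
				return cand, nil
			}
		}
		return nil, fmt.Errorf("no free private /24 found")
	}

	func main() {
		taken := []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24"}
		free, err := firstFreeSubnet(taken)
		if err != nil {
			panic(err)
		}
		fmt.Println("using free private subnet", free) // 192.168.76.0/24, as in the log
	}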
	I1122 00:58:20.368364  718325 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-683181" container
	I1122 00:58:20.368435  718325 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1122 00:58:20.384996  718325 cli_runner.go:164] Run: docker volume create newest-cni-683181 --label name.minikube.sigs.k8s.io=newest-cni-683181 --label created_by.minikube.sigs.k8s.io=true
	I1122 00:58:20.403535  718325 oci.go:103] Successfully created a docker volume newest-cni-683181
	I1122 00:58:20.403629  718325 cli_runner.go:164] Run: docker run --rm --name newest-cni-683181-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-683181 --entrypoint /usr/bin/test -v newest-cni-683181:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -d /var/lib
	I1122 00:58:20.963905  718325 oci.go:107] Successfully prepared a docker volume newest-cni-683181
	I1122 00:58:20.963969  718325 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:58:20.963979  718325 kic.go:194] Starting extracting preloaded images to volume ...
	I1122 00:58:20.964048  718325 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-683181:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir
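	(Editor's note: the extraction step above mounts the lz4 preload tarball read-only into a throwaway kicbase container and untars it into the machine volume. A rough os/exec equivalent of that one docker command is sketched below; the helper name and the placeholder arguments in main are illustrative, not minikube code.)

	package main

	import (
		"fmt"
		"os/exec"
	)

	// extractPreload untars a preloaded-images tarball into a named docker
	// volume by running tar inside a disposable container, mirroring the
	// cli_runner invocation in the log.
	func extractPreload(tarball, volume, image string) error {
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", tarball+":/preloaded.tar:ro",
			"-v", volume+":/extractDir",
			image,
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("extract preload: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		// Placeholder arguments; the real paths and image digest appear in the log line above.
		err := extractPreload("preloaded-images.tar.lz4", "newest-cni-683181", "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934")
		fmt.Println("extract result:", err)
	}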
	W1122 00:58:22.554894  714411 node_ready.go:57] node "default-k8s-diff-port-882305" has "Ready":"False" status (will retry)
	W1122 00:58:25.054585  714411 node_ready.go:57] node "default-k8s-diff-port-882305" has "Ready":"False" status (will retry)
	I1122 00:58:25.555302  714411 node_ready.go:49] node "default-k8s-diff-port-882305" is "Ready"
	I1122 00:58:25.555333  714411 node_ready.go:38] duration metric: took 12.503505057s for node "default-k8s-diff-port-882305" to be "Ready" ...
	I1122 00:58:25.555348  714411 api_server.go:52] waiting for apiserver process to appear ...
	I1122 00:58:25.555409  714411 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:58:25.573366  714411 api_server.go:72] duration metric: took 14.061111538s to wait for apiserver process to appear ...
	I1122 00:58:25.573390  714411 api_server.go:88] waiting for apiserver healthz status ...
	I1122 00:58:25.573408  714411 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1122 00:58:25.608242  714411 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1122 00:58:25.609583  714411 api_server.go:141] control plane version: v1.34.1
	I1122 00:58:25.609606  714411 api_server.go:131] duration metric: took 36.209673ms to wait for apiserver health ...
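	(Editor's note: the healthz wait above repeatedly hits https://192.168.85.2:8444/healthz until it answers 200 "ok". A minimal polling sketch in Go follows; it skips TLS verification only because this sketch has no cluster CA to hand, whereas minikube checks the real endpoint with proper credentials.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls an apiserver /healthz URL until it returns 200 or
	// the timeout expires. Illustrative only.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy after %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.85.2:8444/healthz", time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("ok")
	}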
	I1122 00:58:25.609616  714411 system_pods.go:43] waiting for kube-system pods to appear ...
	I1122 00:58:25.626929  714411 system_pods.go:59] 8 kube-system pods found
	I1122 00:58:25.626968  714411 system_pods.go:61] "coredns-66bc5c9577-448gn" [a2f33c9b-90d6-4197-9606-48fd95ff1ef2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:58:25.626975  714411 system_pods.go:61] "etcd-default-k8s-diff-port-882305" [b7b7077d-891d-48c6-b3dc-2f137b395bc2] Running
	I1122 00:58:25.626980  714411 system_pods.go:61] "kindnet-kcwqj" [52f46f97-517a-4d53-9374-2313d6220643] Running
	I1122 00:58:25.626984  714411 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-882305" [64aeddd8-fe12-4e20-86f8-b6b94d180713] Running
	I1122 00:58:25.626988  714411 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-882305" [da7abe5d-c103-4152-a303-9cca02a54d69] Running
	I1122 00:58:25.626992  714411 system_pods.go:61] "kube-proxy-59l6x" [7cdb7bc0-14ce-4e33-aca8-95137883f5e0] Running
	I1122 00:58:25.626996  714411 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-882305" [5506ff95-9cc2-4344-b578-eca19040f97a] Running
	I1122 00:58:25.627001  714411 system_pods.go:61] "storage-provisioner" [fc6390d1-3d5c-4f70-a9bb-7e5d41d44f2a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:58:25.627008  714411 system_pods.go:74] duration metric: took 17.385937ms to wait for pod list to return data ...
	I1122 00:58:25.627015  714411 default_sa.go:34] waiting for default service account to be created ...
	I1122 00:58:25.646362  714411 default_sa.go:45] found service account: "default"
	I1122 00:58:25.646437  714411 default_sa.go:55] duration metric: took 19.41517ms for default service account to be created ...
	I1122 00:58:25.646478  714411 system_pods.go:116] waiting for k8s-apps to be running ...
	I1122 00:58:25.659407  714411 system_pods.go:86] 8 kube-system pods found
	I1122 00:58:25.659437  714411 system_pods.go:89] "coredns-66bc5c9577-448gn" [a2f33c9b-90d6-4197-9606-48fd95ff1ef2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:58:25.659443  714411 system_pods.go:89] "etcd-default-k8s-diff-port-882305" [b7b7077d-891d-48c6-b3dc-2f137b395bc2] Running
	I1122 00:58:25.659450  714411 system_pods.go:89] "kindnet-kcwqj" [52f46f97-517a-4d53-9374-2313d6220643] Running
	I1122 00:58:25.659454  714411 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-882305" [64aeddd8-fe12-4e20-86f8-b6b94d180713] Running
	I1122 00:58:25.659458  714411 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-882305" [da7abe5d-c103-4152-a303-9cca02a54d69] Running
	I1122 00:58:25.659462  714411 system_pods.go:89] "kube-proxy-59l6x" [7cdb7bc0-14ce-4e33-aca8-95137883f5e0] Running
	I1122 00:58:25.659466  714411 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-882305" [5506ff95-9cc2-4344-b578-eca19040f97a] Running
	I1122 00:58:25.659471  714411 system_pods.go:89] "storage-provisioner" [fc6390d1-3d5c-4f70-a9bb-7e5d41d44f2a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:58:25.659495  714411 retry.go:31] will retry after 252.248803ms: missing components: kube-dns
	I1122 00:58:25.939045  714411 system_pods.go:86] 8 kube-system pods found
	I1122 00:58:25.939084  714411 system_pods.go:89] "coredns-66bc5c9577-448gn" [a2f33c9b-90d6-4197-9606-48fd95ff1ef2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:58:25.939092  714411 system_pods.go:89] "etcd-default-k8s-diff-port-882305" [b7b7077d-891d-48c6-b3dc-2f137b395bc2] Running
	I1122 00:58:25.939097  714411 system_pods.go:89] "kindnet-kcwqj" [52f46f97-517a-4d53-9374-2313d6220643] Running
	I1122 00:58:25.939102  714411 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-882305" [64aeddd8-fe12-4e20-86f8-b6b94d180713] Running
	I1122 00:58:25.939106  714411 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-882305" [da7abe5d-c103-4152-a303-9cca02a54d69] Running
	I1122 00:58:25.939111  714411 system_pods.go:89] "kube-proxy-59l6x" [7cdb7bc0-14ce-4e33-aca8-95137883f5e0] Running
	I1122 00:58:25.939143  714411 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-882305" [5506ff95-9cc2-4344-b578-eca19040f97a] Running
	I1122 00:58:25.939153  714411 system_pods.go:89] "storage-provisioner" [fc6390d1-3d5c-4f70-a9bb-7e5d41d44f2a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:58:25.939160  714411 system_pods.go:126] duration metric: took 292.665784ms to wait for k8s-apps to be running ...
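	(Editor's note: the "will retry after 252.248803ms: missing components: kube-dns" line above reflects a short, slightly randomised retry loop around the pod-list check. The sketch below shows that general shape; the delays and the check function are illustrative, not retry.go itself.)

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryUntil re-runs check with a small jittered delay until it succeeds
	// or the timeout is exceeded. Sketch only.
	func retryUntil(timeout time.Duration, check func() error) error {
		deadline := time.Now().Add(timeout)
		for {
			err := check()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out: last error: %v", err)
			}
			d := 200*time.Millisecond + time.Duration(rand.Intn(200))*time.Millisecond
			fmt.Printf("will retry after %s: %v\n", d, err)
			time.Sleep(d)
		}
	}

	func main() {
		calls := 0
		_ = retryUntil(5*time.Second, func() error {
			calls++
			if calls < 3 {
				return fmt.Errorf("missing components: kube-dns")
			}
			return nil
		})
		fmt.Println("all components running after", calls, "checks")
	}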
	I1122 00:58:25.939169  714411 system_svc.go:44] waiting for kubelet service to be running ....
	I1122 00:58:25.939229  714411 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:58:26.024136  714411 system_svc.go:56] duration metric: took 84.955077ms WaitForService to wait for kubelet
	I1122 00:58:26.024166  714411 kubeadm.go:587] duration metric: took 14.511915793s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:58:26.024185  714411 node_conditions.go:102] verifying NodePressure condition ...
	I1122 00:58:26.035911  714411 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1122 00:58:26.035945  714411 node_conditions.go:123] node cpu capacity is 2
	I1122 00:58:26.035965  714411 node_conditions.go:105] duration metric: took 11.771537ms to run NodePressure ...
	I1122 00:58:26.035979  714411 start.go:242] waiting for startup goroutines ...
	I1122 00:58:26.035993  714411 start.go:247] waiting for cluster config update ...
	I1122 00:58:26.036005  714411 start.go:256] writing updated cluster config ...
	I1122 00:58:26.036355  714411 ssh_runner.go:195] Run: rm -f paused
	I1122 00:58:26.048953  714411 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:58:26.061156  714411 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-448gn" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:58:27.067335  714411 pod_ready.go:94] pod "coredns-66bc5c9577-448gn" is "Ready"
	I1122 00:58:27.067361  714411 pod_ready.go:86] duration metric: took 1.006178521s for pod "coredns-66bc5c9577-448gn" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:58:27.071556  714411 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-882305" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:58:27.080815  714411 pod_ready.go:94] pod "etcd-default-k8s-diff-port-882305" is "Ready"
	I1122 00:58:27.080834  714411 pod_ready.go:86] duration metric: took 9.256312ms for pod "etcd-default-k8s-diff-port-882305" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:58:27.084148  714411 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-882305" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:58:27.089255  714411 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-882305" is "Ready"
	I1122 00:58:27.089276  714411 pod_ready.go:86] duration metric: took 5.106435ms for pod "kube-apiserver-default-k8s-diff-port-882305" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:58:27.092123  714411 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-882305" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:58:27.265193  714411 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-882305" is "Ready"
	I1122 00:58:27.265217  714411 pod_ready.go:86] duration metric: took 173.026962ms for pod "kube-controller-manager-default-k8s-diff-port-882305" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:58:27.465956  714411 pod_ready.go:83] waiting for pod "kube-proxy-59l6x" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:58:27.866426  714411 pod_ready.go:94] pod "kube-proxy-59l6x" is "Ready"
	I1122 00:58:27.866450  714411 pod_ready.go:86] duration metric: took 400.466437ms for pod "kube-proxy-59l6x" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:58:28.066250  714411 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-882305" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:58:28.465990  714411 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-882305" is "Ready"
	I1122 00:58:28.466015  714411 pod_ready.go:86] duration metric: took 399.73928ms for pod "kube-scheduler-default-k8s-diff-port-882305" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:58:28.466027  714411 pod_ready.go:40] duration metric: took 2.417027362s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:58:28.547992  714411 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1122 00:58:28.551947  714411 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-882305" cluster and "default" namespace by default
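	(Editor's note: the pod_ready.go waits above check the Ready condition of each control-plane pod. A client-go sketch of that single check is below; the kubeconfig source and pod name are illustrative, and this assumes client-go is available as a module dependency.)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podIsReady reports whether the named kube-system pod has its Ready
	// condition set to True, the same property pod_ready.go is polling for.
	func podIsReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()
		for {
			ready, err := podIsReady(ctx, cs, "coredns-66bc5c9577-448gn") // name taken from the log
			if err == nil && ready {
				fmt.Println("pod is Ready")
				return
			}
			select {
			case <-ctx.Done():
				fmt.Println("timed out waiting for pod")
				return
			case <-time.After(2 * time.Second):
			}
		}
	}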
	I1122 00:58:25.387277  718325 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-683181:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir: (4.423180639s)
	I1122 00:58:25.387315  718325 kic.go:203] duration metric: took 4.423333036s to extract preloaded images to volume ...
	W1122 00:58:25.387452  718325 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1122 00:58:25.387575  718325 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1122 00:58:25.453537  718325 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-683181 --name newest-cni-683181 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-683181 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-683181 --network newest-cni-683181 --ip 192.168.76.2 --volume newest-cni-683181:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e
	I1122 00:58:25.893794  718325 cli_runner.go:164] Run: docker container inspect newest-cni-683181 --format={{.State.Running}}
	I1122 00:58:25.922328  718325 cli_runner.go:164] Run: docker container inspect newest-cni-683181 --format={{.State.Status}}
	I1122 00:58:25.951654  718325 cli_runner.go:164] Run: docker exec newest-cni-683181 stat /var/lib/dpkg/alternatives/iptables
	I1122 00:58:26.038539  718325 oci.go:144] the created container "newest-cni-683181" has a running status.
	I1122 00:58:26.038572  718325 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/newest-cni-683181/id_rsa...
	I1122 00:58:26.930968  718325 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21934-513600/.minikube/machines/newest-cni-683181/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1122 00:58:26.950408  718325 cli_runner.go:164] Run: docker container inspect newest-cni-683181 --format={{.State.Status}}
	I1122 00:58:26.981454  718325 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1122 00:58:26.981483  718325 kic_runner.go:114] Args: [docker exec --privileged newest-cni-683181 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1122 00:58:27.031467  718325 cli_runner.go:164] Run: docker container inspect newest-cni-683181 --format={{.State.Status}}
	I1122 00:58:27.053183  718325 machine.go:94] provisionDockerMachine start ...
	I1122 00:58:27.053291  718325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-683181
	I1122 00:58:27.080815  718325 main.go:143] libmachine: Using SSH client type: native
	I1122 00:58:27.081141  718325 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33807 <nil> <nil>}
	I1122 00:58:27.081150  718325 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:58:27.245936  718325 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-683181
	
	I1122 00:58:27.245963  718325 ubuntu.go:182] provisioning hostname "newest-cni-683181"
	I1122 00:58:27.246115  718325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-683181
	I1122 00:58:27.272865  718325 main.go:143] libmachine: Using SSH client type: native
	I1122 00:58:27.273186  718325 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33807 <nil> <nil>}
	I1122 00:58:27.273205  718325 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-683181 && echo "newest-cni-683181" | sudo tee /etc/hostname
	I1122 00:58:27.439154  718325 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-683181
	
	I1122 00:58:27.439252  718325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-683181
	I1122 00:58:27.457079  718325 main.go:143] libmachine: Using SSH client type: native
	I1122 00:58:27.457383  718325 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33807 <nil> <nil>}
	I1122 00:58:27.457408  718325 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-683181' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-683181/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-683181' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:58:27.606057  718325 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1122 00:58:27.606088  718325 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-513600/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-513600/.minikube}
	I1122 00:58:27.606133  718325 ubuntu.go:190] setting up certificates
	I1122 00:58:27.606143  718325 provision.go:84] configureAuth start
	I1122 00:58:27.606220  718325 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-683181
	I1122 00:58:27.625558  718325 provision.go:143] copyHostCerts
	I1122 00:58:27.625633  718325 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem, removing ...
	I1122 00:58:27.625645  718325 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem
	I1122 00:58:27.625723  718325 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem (1078 bytes)
	I1122 00:58:27.625976  718325 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem, removing ...
	I1122 00:58:27.625989  718325 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem
	I1122 00:58:27.626025  718325 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem (1123 bytes)
	I1122 00:58:27.626111  718325 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem, removing ...
	I1122 00:58:27.626121  718325 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem
	I1122 00:58:27.626148  718325 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem (1675 bytes)
	I1122 00:58:27.626211  718325 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem org=jenkins.newest-cni-683181 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-683181]
	I1122 00:58:28.046573  718325 provision.go:177] copyRemoteCerts
	I1122 00:58:28.046647  718325 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:58:28.046697  718325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-683181
	I1122 00:58:28.068453  718325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/newest-cni-683181/id_rsa Username:docker}
	I1122 00:58:28.169685  718325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:58:28.188447  718325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1122 00:58:28.206543  718325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1122 00:58:28.225022  718325 provision.go:87] duration metric: took 618.852705ms to configureAuth
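	(Editor's note: configureAuth above generates a server certificate whose SANs are the list printed at provision.go:117 (127.0.0.1, 192.168.76.2, localhost, minikube, newest-cni-683181), signed by the minikube CA. The crypto/x509 sketch below shows what such a SAN-bearing certificate looks like; it creates a throwaway CA in memory rather than reading the real ca.pem/ca-key.pem, so it is illustrative only.)

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Throwaway CA, standing in for the minikubeCA key pair.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate carrying the SANs listed in the log.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-683181"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "newest-cni-683181"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
	}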
	I1122 00:58:28.225050  718325 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:58:28.225274  718325 config.go:182] Loaded profile config "newest-cni-683181": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:58:28.225382  718325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-683181
	I1122 00:58:28.243244  718325 main.go:143] libmachine: Using SSH client type: native
	I1122 00:58:28.243570  718325 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33807 <nil> <nil>}
	I1122 00:58:28.243590  718325 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1122 00:58:28.574863  718325 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1122 00:58:28.574888  718325 machine.go:97] duration metric: took 1.521684556s to provisionDockerMachine
	I1122 00:58:28.574898  718325 client.go:176] duration metric: took 8.321144407s to LocalClient.Create
	I1122 00:58:28.574918  718325 start.go:167] duration metric: took 8.321208799s to libmachine.API.Create "newest-cni-683181"
	I1122 00:58:28.574927  718325 start.go:293] postStartSetup for "newest-cni-683181" (driver="docker")
	I1122 00:58:28.574937  718325 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:58:28.575014  718325 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:58:28.575053  718325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-683181
	I1122 00:58:28.624858  718325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/newest-cni-683181/id_rsa Username:docker}
	I1122 00:58:28.734790  718325 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:58:28.738510  718325 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:58:28.738540  718325 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:58:28.738551  718325 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/addons for local assets ...
	I1122 00:58:28.738615  718325 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/files for local assets ...
	I1122 00:58:28.738703  718325 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem -> 5169372.pem in /etc/ssl/certs
	I1122 00:58:28.738809  718325 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:58:28.747055  718325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /etc/ssl/certs/5169372.pem (1708 bytes)
	I1122 00:58:28.767473  718325 start.go:296] duration metric: took 192.531131ms for postStartSetup
	I1122 00:58:28.767824  718325 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-683181
	I1122 00:58:28.792791  718325 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/newest-cni-683181/config.json ...
	I1122 00:58:28.793829  718325 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:58:28.793886  718325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-683181
	I1122 00:58:28.826076  718325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/newest-cni-683181/id_rsa Username:docker}
	I1122 00:58:28.931010  718325 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:58:28.935511  718325 start.go:128] duration metric: took 8.685614466s to createHost
	I1122 00:58:28.935577  718325 start.go:83] releasing machines lock for "newest-cni-683181", held for 8.685934096s
	I1122 00:58:28.935660  718325 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-683181
	I1122 00:58:28.953201  718325 ssh_runner.go:195] Run: cat /version.json
	I1122 00:58:28.953251  718325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-683181
	I1122 00:58:28.953492  718325 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:58:28.953547  718325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-683181
	I1122 00:58:28.974979  718325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/newest-cni-683181/id_rsa Username:docker}
	I1122 00:58:28.982119  718325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/newest-cni-683181/id_rsa Username:docker}
	I1122 00:58:29.089685  718325 ssh_runner.go:195] Run: systemctl --version
	I1122 00:58:29.188943  718325 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1122 00:58:29.230240  718325 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:58:29.234411  718325 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:58:29.234524  718325 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:58:29.264536  718325 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1122 00:58:29.264561  718325 start.go:496] detecting cgroup driver to use...
	I1122 00:58:29.264602  718325 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1122 00:58:29.264679  718325 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1122 00:58:29.283985  718325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1122 00:58:29.297249  718325 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:58:29.297331  718325 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:58:29.316014  718325 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:58:29.336388  718325 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:58:29.460919  718325 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:58:29.600440  718325 docker.go:234] disabling docker service ...
	I1122 00:58:29.600508  718325 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:58:29.622503  718325 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:58:29.636431  718325 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:58:29.763282  718325 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:58:29.889410  718325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:58:29.902435  718325 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:58:29.916305  718325 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1122 00:58:29.916401  718325 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:58:29.924927  718325 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1122 00:58:29.925022  718325 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:58:29.933735  718325 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:58:29.941959  718325 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:58:29.951233  718325 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:58:29.959541  718325 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:58:29.968253  718325 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:58:29.981658  718325 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:58:29.990863  718325 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:58:29.998382  718325 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:58:30.013931  718325 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:58:30.145049  718325 ssh_runner.go:195] Run: sudo systemctl restart crio
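	(Editor's note: the sequence of sed one-liners above rewrites /etc/crio/crio.conf.d/02-crio.conf in place, swapping the pause image and the cgroup manager before restarting CRI-O. The Go sketch below applies the same two substitutions to an example fragment so the resulting settings are visible; the input text is illustrative, not copied from the test host.)

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		conf := `[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "systemd"
	`
		// Same effect as: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
		// Same effect as: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		fmt.Print(conf)
	}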
	I1122 00:58:30.318841  718325 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1122 00:58:30.318968  718325 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1122 00:58:30.322748  718325 start.go:564] Will wait 60s for crictl version
	I1122 00:58:30.322845  718325 ssh_runner.go:195] Run: which crictl
	I1122 00:58:30.326408  718325 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:58:30.355132  718325 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1122 00:58:30.355251  718325 ssh_runner.go:195] Run: crio --version
	I1122 00:58:30.384301  718325 ssh_runner.go:195] Run: crio --version
	I1122 00:58:30.419783  718325 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1122 00:58:30.424154  718325 cli_runner.go:164] Run: docker network inspect newest-cni-683181 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:58:30.440319  718325 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1122 00:58:30.444086  718325 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:58:30.455827  718325 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1122 00:58:30.458083  718325 kubeadm.go:884] updating cluster {Name:newest-cni-683181 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-683181 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1122 00:58:30.458226  718325 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:58:30.458299  718325 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:58:30.494981  718325 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:58:30.495012  718325 crio.go:433] Images already preloaded, skipping extraction
	I1122 00:58:30.495130  718325 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:58:30.525050  718325 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:58:30.525075  718325 cache_images.go:86] Images are preloaded, skipping loading
	I1122 00:58:30.525083  718325 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1122 00:58:30.525171  718325 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-683181 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-683181 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1122 00:58:30.525258  718325 ssh_runner.go:195] Run: crio config
	I1122 00:58:30.600609  718325 cni.go:84] Creating CNI manager for ""
	I1122 00:58:30.600632  718325 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:58:30.600652  718325 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1122 00:58:30.600677  718325 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-683181 NodeName:newest-cni-683181 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 00:58:30.600839  718325 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-683181"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1122 00:58:30.600913  718325 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 00:58:30.608728  718325 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:58:30.608796  718325 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1122 00:58:30.616574  718325 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1122 00:58:30.630613  718325 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:58:30.644902  718325 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1122 00:58:30.658585  718325 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1122 00:58:30.662287  718325 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:58:30.672483  718325 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:58:30.785208  718325 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:58:30.801883  718325 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/newest-cni-683181 for IP: 192.168.76.2
	I1122 00:58:30.801904  718325 certs.go:195] generating shared ca certs ...
	I1122 00:58:30.801920  718325 certs.go:227] acquiring lock for ca certs: {Name:mkaf4c79493334cb2058b349c36be7473837f9b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:58:30.802060  718325 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key
	I1122 00:58:30.802108  718325 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key
	I1122 00:58:30.802124  718325 certs.go:257] generating profile certs ...
	I1122 00:58:30.802178  718325 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/newest-cni-683181/client.key
	I1122 00:58:30.802201  718325 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/newest-cni-683181/client.crt with IP's: []
	I1122 00:58:31.031497  718325 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/newest-cni-683181/client.crt ...
	I1122 00:58:31.031531  718325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/newest-cni-683181/client.crt: {Name:mkd19208e1bfd76d07486e29621b9d2b422e529a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:58:31.031733  718325 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/newest-cni-683181/client.key ...
	I1122 00:58:31.031747  718325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/newest-cni-683181/client.key: {Name:mk8d0f18b80c38a42642ae7193114ed97d0458ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:58:31.033096  718325 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/newest-cni-683181/apiserver.key.458b4884
	I1122 00:58:31.033118  718325 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/newest-cni-683181/apiserver.crt.458b4884 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1122 00:58:31.326005  718325 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/newest-cni-683181/apiserver.crt.458b4884 ...
	I1122 00:58:31.326040  718325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/newest-cni-683181/apiserver.crt.458b4884: {Name:mk5b66fb94768a5a8d32bfd54c2438a1111fe28d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:58:31.326226  718325 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/newest-cni-683181/apiserver.key.458b4884 ...
	I1122 00:58:31.326241  718325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/newest-cni-683181/apiserver.key.458b4884: {Name:mk6a2095a7a20b98e395f295f496962d5b951e85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:58:31.326328  718325 certs.go:382] copying /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/newest-cni-683181/apiserver.crt.458b4884 -> /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/newest-cni-683181/apiserver.crt
	I1122 00:58:31.326416  718325 certs.go:386] copying /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/newest-cni-683181/apiserver.key.458b4884 -> /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/newest-cni-683181/apiserver.key
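	(Editor's note: the apiserver certificate generated above carries the SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]; the 10.96.0.1 entry is the first usable address of the service CIDR 10.96.0.0/12, i.e. the in-cluster "kubernetes" service IP. A simplified Go sketch of that derivation follows; it only handles IPv4 /n networks whose first host address is network+1.)

	package main

	import (
		"fmt"
		"net"
	)

	// apiserverServiceIP returns the first usable address of an IPv4 service
	// CIDR, which is the address the apiserver cert must cover. Sketch only.
	func apiserverServiceIP(serviceCIDR string) (net.IP, error) {
		_, n, err := net.ParseCIDR(serviceCIDR)
		if err != nil {
			return nil, err
		}
		ip := n.IP.To4()
		if ip == nil {
			return nil, fmt.Errorf("IPv4 CIDR expected")
		}
		first := make(net.IP, len(ip))
		copy(first, ip)
		first[3]++ // network address + 1
		return first, nil
	}

	func main() {
		ip, err := apiserverServiceIP("10.96.0.0/12")
		if err != nil {
			panic(err)
		}
		fmt.Println(ip) // 10.96.0.1
	}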
	I1122 00:58:31.326482  718325 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/newest-cni-683181/proxy-client.key
	I1122 00:58:31.326502  718325 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/newest-cni-683181/proxy-client.crt with IP's: []
	I1122 00:58:31.541196  718325 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/newest-cni-683181/proxy-client.crt ...
	I1122 00:58:31.541237  718325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/newest-cni-683181/proxy-client.crt: {Name:mk8c3166d3f4e910db3f351e3b5c9dc316b6b166 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:58:31.541418  718325 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/newest-cni-683181/proxy-client.key ...
	I1122 00:58:31.541434  718325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/newest-cni-683181/proxy-client.key: {Name:mkaa41bb0ce1ff7b92188102ca686c8373f5ef2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:58:31.541614  718325 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem (1338 bytes)
	W1122 00:58:31.541662  718325 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937_empty.pem, impossibly tiny 0 bytes
	I1122 00:58:31.541675  718325 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:58:31.541708  718325 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:58:31.541735  718325 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:58:31.541763  718325 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem (1675 bytes)
	I1122 00:58:31.541829  718325 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem (1708 bytes)
	I1122 00:58:31.542407  718325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:58:31.562702  718325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1122 00:58:31.582670  718325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:58:31.601042  718325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:58:31.618260  718325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/newest-cni-683181/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1122 00:58:31.636280  718325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/newest-cni-683181/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1122 00:58:31.654933  718325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/newest-cni-683181/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:58:31.673478  718325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/newest-cni-683181/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1122 00:58:31.691901  718325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:58:31.710557  718325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem --> /usr/share/ca-certificates/516937.pem (1338 bytes)
	I1122 00:58:31.728705  718325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /usr/share/ca-certificates/5169372.pem (1708 bytes)
	I1122 00:58:31.754438  718325 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 00:58:31.768993  718325 ssh_runner.go:195] Run: openssl version
	I1122 00:58:31.775710  718325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:58:31.783961  718325 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:58:31.787848  718325 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:58:31.787916  718325 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:58:31.828990  718325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1122 00:58:31.837360  718325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516937.pem && ln -fs /usr/share/ca-certificates/516937.pem /etc/ssl/certs/516937.pem"
	I1122 00:58:31.845998  718325 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516937.pem
	I1122 00:58:31.849771  718325 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:55 /usr/share/ca-certificates/516937.pem
	I1122 00:58:31.850007  718325 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516937.pem
	I1122 00:58:31.891930  718325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516937.pem /etc/ssl/certs/51391683.0"
	I1122 00:58:31.900727  718325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5169372.pem && ln -fs /usr/share/ca-certificates/5169372.pem /etc/ssl/certs/5169372.pem"
	I1122 00:58:31.909894  718325 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5169372.pem
	I1122 00:58:31.914390  718325 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:55 /usr/share/ca-certificates/5169372.pem
	I1122 00:58:31.914478  718325 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5169372.pem
	I1122 00:58:31.955665  718325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5169372.pem /etc/ssl/certs/3ec20f2e.0"
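	(Editor's note: the three test-and-link commands above install each CA certificate under /etc/ssl/certs with its OpenSSL subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0), which is how the system trust store locates it. A small Go sketch of that hash-and-symlink pattern follows; the paths in main are taken from the log but the helper itself is illustrative.)

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// linkByHash computes the OpenSSL subject hash of a certificate and
	// symlinks <hash>.0 to it inside certsDir, mirroring the shell above.
	func linkByHash(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // e.g. b5213941 for minikubeCA.pem
		return os.Symlink(certPath, certsDir+"/"+hash+".0")
	}

	func main() {
		if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Println("link failed:", err)
		}
	}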
	I1122 00:58:31.964297  718325 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:58:31.968175  718325 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1122 00:58:31.968228  718325 kubeadm.go:401] StartCluster: {Name:newest-cni-683181 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-683181 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:58:31.968312  718325 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1122 00:58:31.968373  718325 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:58:31.996552  718325 cri.go:89] found id: ""
	I1122 00:58:31.996621  718325 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 00:58:32.006910  718325 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1122 00:58:32.016202  718325 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1122 00:58:32.016313  718325 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1122 00:58:32.025734  718325 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1122 00:58:32.025757  718325 kubeadm.go:158] found existing configuration files:
	
	I1122 00:58:32.025861  718325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1122 00:58:32.034545  718325 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1122 00:58:32.034626  718325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1122 00:58:32.043755  718325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1122 00:58:32.052628  718325 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1122 00:58:32.052719  718325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1122 00:58:32.061057  718325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1122 00:58:32.069876  718325 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1122 00:58:32.070002  718325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1122 00:58:32.079438  718325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1122 00:58:32.088320  718325 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1122 00:58:32.088438  718325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
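The four grep/rm pairs above are the stale-config cleanup: each kubeconfig under /etc/kubernetes is checked for the expected control-plane endpoint, and any file that does not contain it (or does not exist) is removed so kubeadm can regenerate it. A minimal Go sketch of that loop, using the paths and endpoint taken verbatim from the log; the function name is illustrative.

package main

import "os/exec"

// cleanStaleKubeconfigs mirrors the grep-then-remove loop in the log above.
func cleanStaleKubeconfigs() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		// grep exits non-zero when the endpoint, or the file itself, is missing.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}

func main() { cleanStaleKubeconfigs() }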
	I1122 00:58:32.097411  718325 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1122 00:58:32.171006  718325 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1122 00:58:32.171366  718325 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1122 00:58:32.249081  718325 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
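The Start line above drives kubeadm init with the generated config and a long --ignore-preflight-errors list, because several checks (Swap, Mem, SystemVerification, the kubelet port, the manifest directories) cannot pass inside a docker-driver node; the three [WARNING ...] lines are those checks downgraded to non-fatal. A short Go sketch of the same invocation; the ignore list is abbreviated here and the env PATH wrapper from the log is dropped.

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Abbreviated ignore list; the log passes a longer one.
	ignore := "DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,NumCPU,Mem,SystemVerification"
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.34.1/kubeadm", "init",
		"--config", "/var/tmp/minikube/kubeadm.yaml",
		"--ignore-preflight-errors="+ignore)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	_ = cmd.Run()
}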
	
	
	==> CRI-O <==
	Nov 22 00:58:25 default-k8s-diff-port-882305 crio[835]: time="2025-11-22T00:58:25.80225482Z" level=info msg="Created container 2a17ae0f9b3a7e186c47a45ccb5860e88a849a8f7cc0a49a3c922bb88c4165e3: kube-system/coredns-66bc5c9577-448gn/coredns" id=680ab5a5-7095-4f03-9bc1-ced01c9bb7a6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:58:25 default-k8s-diff-port-882305 crio[835]: time="2025-11-22T00:58:25.80402043Z" level=info msg="Starting container: 2a17ae0f9b3a7e186c47a45ccb5860e88a849a8f7cc0a49a3c922bb88c4165e3" id=3d4282a8-10ea-418c-aa9f-d9cbbcad74ca name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:58:25 default-k8s-diff-port-882305 crio[835]: time="2025-11-22T00:58:25.80559247Z" level=info msg="Started container" PID=1720 containerID=2a17ae0f9b3a7e186c47a45ccb5860e88a849a8f7cc0a49a3c922bb88c4165e3 description=kube-system/coredns-66bc5c9577-448gn/coredns id=3d4282a8-10ea-418c-aa9f-d9cbbcad74ca name=/runtime.v1.RuntimeService/StartContainer sandboxID=c585dab44a49adbb40e16e50bde1f20a00c8a07d67567f02ae8096afc167b827
	Nov 22 00:58:29 default-k8s-diff-port-882305 crio[835]: time="2025-11-22T00:58:29.117848895Z" level=info msg="Running pod sandbox: default/busybox/POD" id=7a4aa7a4-dbc5-47fa-a6b2-62a7a7382dc5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 22 00:58:29 default-k8s-diff-port-882305 crio[835]: time="2025-11-22T00:58:29.117935858Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:58:29 default-k8s-diff-port-882305 crio[835]: time="2025-11-22T00:58:29.128903381Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:f684ca326a633dec162a1509289d97cfa22bfd367b43305bca5767f8e5db718d UID:c3ec38da-dbd6-47f5-acb8-b65445289488 NetNS:/var/run/netns/070b5466-1433-4fe7-aadd-6b16d67a4179 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012da68}] Aliases:map[]}"
	Nov 22 00:58:29 default-k8s-diff-port-882305 crio[835]: time="2025-11-22T00:58:29.128944242Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 22 00:58:29 default-k8s-diff-port-882305 crio[835]: time="2025-11-22T00:58:29.142554477Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:f684ca326a633dec162a1509289d97cfa22bfd367b43305bca5767f8e5db718d UID:c3ec38da-dbd6-47f5-acb8-b65445289488 NetNS:/var/run/netns/070b5466-1433-4fe7-aadd-6b16d67a4179 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012da68}] Aliases:map[]}"
	Nov 22 00:58:29 default-k8s-diff-port-882305 crio[835]: time="2025-11-22T00:58:29.142773983Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 22 00:58:29 default-k8s-diff-port-882305 crio[835]: time="2025-11-22T00:58:29.153709433Z" level=info msg="Ran pod sandbox f684ca326a633dec162a1509289d97cfa22bfd367b43305bca5767f8e5db718d with infra container: default/busybox/POD" id=7a4aa7a4-dbc5-47fa-a6b2-62a7a7382dc5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 22 00:58:29 default-k8s-diff-port-882305 crio[835]: time="2025-11-22T00:58:29.156295663Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=172f95a7-588b-4742-9491-dd1b7584b036 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:58:29 default-k8s-diff-port-882305 crio[835]: time="2025-11-22T00:58:29.156551196Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=172f95a7-588b-4742-9491-dd1b7584b036 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:58:29 default-k8s-diff-port-882305 crio[835]: time="2025-11-22T00:58:29.156664588Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=172f95a7-588b-4742-9491-dd1b7584b036 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:58:29 default-k8s-diff-port-882305 crio[835]: time="2025-11-22T00:58:29.157485593Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=661c7537-0d59-41e3-8a7b-c193e8057529 name=/runtime.v1.ImageService/PullImage
	Nov 22 00:58:29 default-k8s-diff-port-882305 crio[835]: time="2025-11-22T00:58:29.160122989Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 22 00:58:31 default-k8s-diff-port-882305 crio[835]: time="2025-11-22T00:58:31.252573001Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=661c7537-0d59-41e3-8a7b-c193e8057529 name=/runtime.v1.ImageService/PullImage
	Nov 22 00:58:31 default-k8s-diff-port-882305 crio[835]: time="2025-11-22T00:58:31.253353663Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=de294b72-9791-4c7d-b451-cd062d1175c2 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:58:31 default-k8s-diff-port-882305 crio[835]: time="2025-11-22T00:58:31.257469522Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e258a5e1-831d-4b9d-954d-674bfb3c9d70 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:58:31 default-k8s-diff-port-882305 crio[835]: time="2025-11-22T00:58:31.266070674Z" level=info msg="Creating container: default/busybox/busybox" id=6279889c-93d7-4218-a2e8-6b3a7b2dda2c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:58:31 default-k8s-diff-port-882305 crio[835]: time="2025-11-22T00:58:31.266230628Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:58:31 default-k8s-diff-port-882305 crio[835]: time="2025-11-22T00:58:31.271490281Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:58:31 default-k8s-diff-port-882305 crio[835]: time="2025-11-22T00:58:31.271953472Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:58:31 default-k8s-diff-port-882305 crio[835]: time="2025-11-22T00:58:31.291856619Z" level=info msg="Created container 92e2e999fe1c53b908c9ecda8a36d6cf03c05cf80d0ee5e169977f06fd832fbc: default/busybox/busybox" id=6279889c-93d7-4218-a2e8-6b3a7b2dda2c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:58:31 default-k8s-diff-port-882305 crio[835]: time="2025-11-22T00:58:31.295460813Z" level=info msg="Starting container: 92e2e999fe1c53b908c9ecda8a36d6cf03c05cf80d0ee5e169977f06fd832fbc" id=6f1c8deb-4802-465a-917d-7aaf5a3a6fd7 name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:58:31 default-k8s-diff-port-882305 crio[835]: time="2025-11-22T00:58:31.299390372Z" level=info msg="Started container" PID=1773 containerID=92e2e999fe1c53b908c9ecda8a36d6cf03c05cf80d0ee5e169977f06fd832fbc description=default/busybox/busybox id=6f1c8deb-4802-465a-917d-7aaf5a3a6fd7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f684ca326a633dec162a1509289d97cfa22bfd367b43305bca5767f8e5db718d
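The CRI-O section shows the normal lifecycle for the busybox pod: run the sandbox, pull gcr.io/k8s-minikube/busybox:1.28.4-glibc, create the container, start it. A small sketch that lists the resulting containers and images with crictl, the same CLI the test harness drives over SSH; it assumes crictl is installed on the node and can reach the CRI-O socket.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	for _, args := range [][]string{
		{"crictl", "ps", "-a"}, // all containers, running or exited
		{"crictl", "images"},   // images present on the node
	} {
		out, err := exec.Command("sudo", args...).CombinedOutput()
		if err != nil {
			fmt.Println(err)
			continue
		}
		fmt.Print(string(out))
	}
}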
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	92e2e999fe1c5       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   9 seconds ago       Running             busybox                   0                   f684ca326a633       busybox                                                default
	2a17ae0f9b3a7       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      14 seconds ago      Running             coredns                   0                   c585dab44a49a       coredns-66bc5c9577-448gn                               kube-system
	df7389f58d2be       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      14 seconds ago      Running             storage-provisioner       0                   57de35e151fe6       storage-provisioner                                    kube-system
	6374c9a498e5e       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      26 seconds ago      Running             kindnet-cni               0                   8a594035f3873       kindnet-kcwqj                                          kube-system
	3ece61dc44bf3       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      26 seconds ago      Running             kube-proxy                0                   75c8beb7d7a7a       kube-proxy-59l6x                                       kube-system
	7feb7d632ef28       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      40 seconds ago      Running             kube-scheduler            0                   ccda207699c3f       kube-scheduler-default-k8s-diff-port-882305            kube-system
	421b47aa6e614       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      40 seconds ago      Running             etcd                      0                   51aaf6aee78bd       etcd-default-k8s-diff-port-882305                      kube-system
	f2438cab7689d       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      40 seconds ago      Running             kube-apiserver            0                   f5aceadffadf6       kube-apiserver-default-k8s-diff-port-882305            kube-system
	1bd9d2111f834       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      41 seconds ago      Running             kube-controller-manager   0                   4e2e634cb6cd9       kube-controller-manager-default-k8s-diff-port-882305   kube-system
	
	
	==> coredns [2a17ae0f9b3a7e186c47a45ccb5860e88a849a8f7cc0a49a3c922bb88c4165e3] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52025 - 27829 "HINFO IN 5886354095174913403.8046833604328528915. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.02337082s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-882305
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-882305
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=default-k8s-diff-port-882305
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T00_58_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:58:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-882305
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:58:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:58:37 +0000   Sat, 22 Nov 2025 00:57:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:58:37 +0000   Sat, 22 Nov 2025 00:57:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:58:37 +0000   Sat, 22 Nov 2025 00:57:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 22 Nov 2025 00:58:37 +0000   Sat, 22 Nov 2025 00:58:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-882305
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c92eea4d3c03c0156e01fcf8691e3907
	  System UUID:                3e7302ec-f0a5-4d0d-8a5f-75986888bef8
	  Boot ID:                    72ac7385-472f-47d1-a23e-bd80468d6e09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-448gn                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     29s
	  kube-system                 etcd-default-k8s-diff-port-882305                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         33s
	  kube-system                 kindnet-kcwqj                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-default-k8s-diff-port-882305             250m (12%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-882305    200m (10%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-59l6x                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-default-k8s-diff-port-882305             100m (5%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 26s                kube-proxy       
	  Normal   NodeHasSufficientMemory  41s (x8 over 41s)  kubelet          Node default-k8s-diff-port-882305 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    41s (x8 over 41s)  kubelet          Node default-k8s-diff-port-882305 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     41s (x8 over 41s)  kubelet          Node default-k8s-diff-port-882305 status is now: NodeHasSufficientPID
	  Normal   Starting                 34s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 34s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  34s                kubelet          Node default-k8s-diff-port-882305 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    34s                kubelet          Node default-k8s-diff-port-882305 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     34s                kubelet          Node default-k8s-diff-port-882305 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           30s                node-controller  Node default-k8s-diff-port-882305 event: Registered Node default-k8s-diff-port-882305 in Controller
	  Normal   NodeReady                15s                kubelet          Node default-k8s-diff-port-882305 status is now: NodeReady
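As a quick cross-check of the Allocated resources block above: the per-pod CPU requests sum to 100m + 100m + 100m + 250m + 200m + 100m = 850m, which is 42.5% of the node's 2 CPUs (shown rounded to 42%); memory requests sum to 70Mi + 100Mi + 50Mi = 220Mi and memory limits to 170Mi + 50Mi = 220Mi, matching the totals reported. The same view can be reproduced against the cluster with "kubectl describe node default-k8s-diff-port-882305".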
	
	
	==> dmesg <==
	[Nov22 00:35] overlayfs: idmapped layers are currently not supported
	[Nov22 00:36] overlayfs: idmapped layers are currently not supported
	[ +18.168104] overlayfs: idmapped layers are currently not supported
	[Nov22 00:37] overlayfs: idmapped layers are currently not supported
	[ +56.322609] overlayfs: idmapped layers are currently not supported
	[Nov22 00:38] overlayfs: idmapped layers are currently not supported
	[Nov22 00:39] overlayfs: idmapped layers are currently not supported
	[ +23.174928] overlayfs: idmapped layers are currently not supported
	[Nov22 00:41] overlayfs: idmapped layers are currently not supported
	[Nov22 00:42] overlayfs: idmapped layers are currently not supported
	[Nov22 00:44] overlayfs: idmapped layers are currently not supported
	[Nov22 00:45] overlayfs: idmapped layers are currently not supported
	[Nov22 00:46] overlayfs: idmapped layers are currently not supported
	[Nov22 00:48] overlayfs: idmapped layers are currently not supported
	[Nov22 00:50] overlayfs: idmapped layers are currently not supported
	[Nov22 00:51] overlayfs: idmapped layers are currently not supported
	[ +11.900293] overlayfs: idmapped layers are currently not supported
	[ +28.922055] overlayfs: idmapped layers are currently not supported
	[Nov22 00:52] overlayfs: idmapped layers are currently not supported
	[Nov22 00:53] overlayfs: idmapped layers are currently not supported
	[Nov22 00:54] overlayfs: idmapped layers are currently not supported
	[Nov22 00:55] overlayfs: idmapped layers are currently not supported
	[Nov22 00:56] overlayfs: idmapped layers are currently not supported
	[Nov22 00:57] overlayfs: idmapped layers are currently not supported
	[Nov22 00:58] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [421b47aa6e614c28cd113fe0e4c7c1e5a8026a70f3bdcf7a46c414f525867a98] <==
	{"level":"warn","ts":"2025-11-22T00:58:02.486388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:58:02.519527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:58:02.549873Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:58:02.566811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:58:02.627225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:58:02.627951Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:58:02.654894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:58:02.662788Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:58:02.680049Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:58:02.703807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:58:02.713984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:58:02.741202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:58:02.771579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:58:02.781192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:58:02.798987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:58:02.814553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:58:02.828116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:58:02.842290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:58:02.857488Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:58:02.873529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:58:02.895278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:58:02.916047Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:58:02.931658Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:58:02.954740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:58:03.020712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50156","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 00:58:40 up  5:40,  0 user,  load average: 3.01, 3.68, 2.91
	Linux default-k8s-diff-port-882305 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6374c9a498e5e7c06aaf21234670a28f3d303bb7e5eccc9c2c6640e897e0186e] <==
	I1122 00:58:14.512047       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1122 00:58:14.512383       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1122 00:58:14.512543       1 main.go:148] setting mtu 1500 for CNI 
	I1122 00:58:14.512586       1 main.go:178] kindnetd IP family: "ipv4"
	I1122 00:58:14.512622       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-22T00:58:14Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1122 00:58:14.716948       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1122 00:58:14.717015       1 controller.go:381] "Waiting for informer caches to sync"
	I1122 00:58:14.717052       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1122 00:58:14.717225       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1122 00:58:14.917176       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1122 00:58:14.917293       1 metrics.go:72] Registering metrics
	I1122 00:58:14.917406       1 controller.go:711] "Syncing nftables rules"
	I1122 00:58:24.717939       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:58:24.718008       1 main.go:301] handling current node
	I1122 00:58:34.717890       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:58:34.717993       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f2438cab7689d1617a859f298babbda7794638d4149b84d8b0c9853b900c542e] <==
	I1122 00:58:03.835043       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1122 00:58:03.843840       1 controller.go:667] quota admission added evaluator for: namespaces
	I1122 00:58:03.884026       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:58:03.884209       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1122 00:58:03.895469       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1122 00:58:03.914571       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:58:04.045372       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1122 00:58:04.563027       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1122 00:58:04.571426       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1122 00:58:04.571446       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1122 00:58:05.304862       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1122 00:58:05.382903       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1122 00:58:05.523230       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1122 00:58:05.531031       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1122 00:58:05.532303       1 controller.go:667] quota admission added evaluator for: endpoints
	I1122 00:58:05.538050       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1122 00:58:05.692891       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1122 00:58:06.763593       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1122 00:58:06.785034       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1122 00:58:06.805848       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1122 00:58:10.997048       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1122 00:58:11.906559       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:58:11.923636       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1122 00:58:11.941237       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1122 00:58:38.989347       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:37004: use of closed network connection
	
	
	==> kube-controller-manager [1bd9d2111f834e7fdb8c169092a31c0fec591b7766c1ffe5549f95e7ae30dca0] <==
	I1122 00:58:10.783361       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1122 00:58:10.787148       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1122 00:58:10.787164       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1122 00:58:10.788755       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1122 00:58:10.790558       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:58:10.790575       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1122 00:58:10.790583       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1122 00:58:10.791600       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1122 00:58:10.791844       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1122 00:58:10.791900       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1122 00:58:10.791705       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1122 00:58:10.796349       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1122 00:58:10.791690       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1122 00:58:10.796459       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1122 00:58:10.796637       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1122 00:58:10.800136       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1122 00:58:10.800187       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1122 00:58:10.800209       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1122 00:58:10.800215       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1122 00:58:10.800220       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1122 00:58:10.804631       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:58:10.811855       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-882305" podCIDRs=["10.244.0.0/24"]
	I1122 00:58:10.815273       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1122 00:58:10.826079       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:58:25.785079       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [3ece61dc44bf3dd53626afcb02728d5bf6dd3f0e0308df0dadaeba6c7c82c3f4] <==
	I1122 00:58:14.047495       1 server_linux.go:53] "Using iptables proxy"
	I1122 00:58:14.127511       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1122 00:58:14.228526       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1122 00:58:14.228660       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1122 00:58:14.228769       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1122 00:58:14.288131       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1122 00:58:14.288237       1 server_linux.go:132] "Using iptables Proxier"
	I1122 00:58:14.294643       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1122 00:58:14.295060       1 server.go:527] "Version info" version="v1.34.1"
	I1122 00:58:14.295566       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:58:14.297119       1 config.go:200] "Starting service config controller"
	I1122 00:58:14.297168       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1122 00:58:14.297230       1 config.go:106] "Starting endpoint slice config controller"
	I1122 00:58:14.297270       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1122 00:58:14.297350       1 config.go:403] "Starting serviceCIDR config controller"
	I1122 00:58:14.297378       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1122 00:58:14.305505       1 config.go:309] "Starting node config controller"
	I1122 00:58:14.306778       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1122 00:58:14.306811       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1122 00:58:14.397430       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1122 00:58:14.397573       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1122 00:58:14.397881       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [7feb7d632ef28d58f8d91b1c32a669c2c955efe6d6d4db46e4d68ad1d05fea2d] <==
	E1122 00:58:03.858293       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1122 00:58:03.858349       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1122 00:58:03.866296       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1122 00:58:03.866496       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1122 00:58:03.866550       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1122 00:58:03.866666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1122 00:58:03.866727       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1122 00:58:03.866774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1122 00:58:03.866813       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1122 00:58:03.866908       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1122 00:58:03.866997       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1122 00:58:03.867073       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1122 00:58:03.867237       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1122 00:58:03.867328       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1122 00:58:04.686570       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1122 00:58:04.712451       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1122 00:58:04.757603       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1122 00:58:04.764090       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1122 00:58:04.878139       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1122 00:58:04.896039       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1122 00:58:04.905628       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1122 00:58:04.960427       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1122 00:58:04.966323       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1122 00:58:04.996900       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1122 00:58:05.445027       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 22 00:58:12 default-k8s-diff-port-882305 kubelet[1296]: I1122 00:58:12.252941    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/52f46f97-517a-4d53-9374-2313d6220643-cni-cfg\") pod \"kindnet-kcwqj\" (UID: \"52f46f97-517a-4d53-9374-2313d6220643\") " pod="kube-system/kindnet-kcwqj"
	Nov 22 00:58:12 default-k8s-diff-port-882305 kubelet[1296]: I1122 00:58:12.253024    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/52f46f97-517a-4d53-9374-2313d6220643-xtables-lock\") pod \"kindnet-kcwqj\" (UID: \"52f46f97-517a-4d53-9374-2313d6220643\") " pod="kube-system/kindnet-kcwqj"
	Nov 22 00:58:12 default-k8s-diff-port-882305 kubelet[1296]: I1122 00:58:12.253042    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/52f46f97-517a-4d53-9374-2313d6220643-lib-modules\") pod \"kindnet-kcwqj\" (UID: \"52f46f97-517a-4d53-9374-2313d6220643\") " pod="kube-system/kindnet-kcwqj"
	Nov 22 00:58:12 default-k8s-diff-port-882305 kubelet[1296]: I1122 00:58:12.253061    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2llq\" (UniqueName: \"kubernetes.io/projected/52f46f97-517a-4d53-9374-2313d6220643-kube-api-access-g2llq\") pod \"kindnet-kcwqj\" (UID: \"52f46f97-517a-4d53-9374-2313d6220643\") " pod="kube-system/kindnet-kcwqj"
	Nov 22 00:58:13 default-k8s-diff-port-882305 kubelet[1296]: E1122 00:58:13.296868    1296 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Nov 22 00:58:13 default-k8s-diff-port-882305 kubelet[1296]: E1122 00:58:13.297283    1296 projected.go:196] Error preparing data for projected volume kube-api-access-znf99 for pod kube-system/kube-proxy-59l6x: failed to sync configmap cache: timed out waiting for the condition
	Nov 22 00:58:13 default-k8s-diff-port-882305 kubelet[1296]: E1122 00:58:13.297440    1296 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7cdb7bc0-14ce-4e33-aca8-95137883f5e0-kube-api-access-znf99 podName:7cdb7bc0-14ce-4e33-aca8-95137883f5e0 nodeName:}" failed. No retries permitted until 2025-11-22 00:58:13.797414023 +0000 UTC m=+7.212035373 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-znf99" (UniqueName: "kubernetes.io/projected/7cdb7bc0-14ce-4e33-aca8-95137883f5e0-kube-api-access-znf99") pod "kube-proxy-59l6x" (UID: "7cdb7bc0-14ce-4e33-aca8-95137883f5e0") : failed to sync configmap cache: timed out waiting for the condition
	Nov 22 00:58:13 default-k8s-diff-port-882305 kubelet[1296]: E1122 00:58:13.368091    1296 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Nov 22 00:58:13 default-k8s-diff-port-882305 kubelet[1296]: E1122 00:58:13.368248    1296 projected.go:196] Error preparing data for projected volume kube-api-access-g2llq for pod kube-system/kindnet-kcwqj: failed to sync configmap cache: timed out waiting for the condition
	Nov 22 00:58:13 default-k8s-diff-port-882305 kubelet[1296]: E1122 00:58:13.368377    1296 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/52f46f97-517a-4d53-9374-2313d6220643-kube-api-access-g2llq podName:52f46f97-517a-4d53-9374-2313d6220643 nodeName:}" failed. No retries permitted until 2025-11-22 00:58:13.868353666 +0000 UTC m=+7.282975024 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-g2llq" (UniqueName: "kubernetes.io/projected/52f46f97-517a-4d53-9374-2313d6220643-kube-api-access-g2llq") pod "kindnet-kcwqj" (UID: "52f46f97-517a-4d53-9374-2313d6220643") : failed to sync configmap cache: timed out waiting for the condition
	Nov 22 00:58:13 default-k8s-diff-port-882305 kubelet[1296]: I1122 00:58:13.863472    1296 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 22 00:58:13 default-k8s-diff-port-882305 kubelet[1296]: W1122 00:58:13.932753    1296 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/3f972239d661c1a7b13b144967a8f04cbcdb72c736bec80a78bae8fab1d357e1/crio-75c8beb7d7a7a125535e68d59962b54f7737070c6647efa8a8939959b94fc0f0 WatchSource:0}: Error finding container 75c8beb7d7a7a125535e68d59962b54f7737070c6647efa8a8939959b94fc0f0: Status 404 returned error can't find the container with id 75c8beb7d7a7a125535e68d59962b54f7737070c6647efa8a8939959b94fc0f0
	Nov 22 00:58:14 default-k8s-diff-port-882305 kubelet[1296]: W1122 00:58:14.262824    1296 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/3f972239d661c1a7b13b144967a8f04cbcdb72c736bec80a78bae8fab1d357e1/crio-8a594035f38739fc613f73f4524003ec04ac054069c4e9b74651dd3f7b7ef463 WatchSource:0}: Error finding container 8a594035f38739fc613f73f4524003ec04ac054069c4e9b74651dd3f7b7ef463: Status 404 returned error can't find the container with id 8a594035f38739fc613f73f4524003ec04ac054069c4e9b74651dd3f7b7ef463
	Nov 22 00:58:14 default-k8s-diff-port-882305 kubelet[1296]: I1122 00:58:14.859554    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-kcwqj" podStartSLOduration=3.859533587 podStartE2EDuration="3.859533587s" podCreationTimestamp="2025-11-22 00:58:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:58:14.82075251 +0000 UTC m=+8.235373876" watchObservedRunningTime="2025-11-22 00:58:14.859533587 +0000 UTC m=+8.274154945"
	Nov 22 00:58:17 default-k8s-diff-port-882305 kubelet[1296]: I1122 00:58:17.868108    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-59l6x" podStartSLOduration=6.868087451 podStartE2EDuration="6.868087451s" podCreationTimestamp="2025-11-22 00:58:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:58:14.863744893 +0000 UTC m=+8.278366243" watchObservedRunningTime="2025-11-22 00:58:17.868087451 +0000 UTC m=+11.282708809"
	Nov 22 00:58:25 default-k8s-diff-port-882305 kubelet[1296]: I1122 00:58:25.096459    1296 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 22 00:58:25 default-k8s-diff-port-882305 kubelet[1296]: I1122 00:58:25.357705    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2f33c9b-90d6-4197-9606-48fd95ff1ef2-config-volume\") pod \"coredns-66bc5c9577-448gn\" (UID: \"a2f33c9b-90d6-4197-9606-48fd95ff1ef2\") " pod="kube-system/coredns-66bc5c9577-448gn"
	Nov 22 00:58:25 default-k8s-diff-port-882305 kubelet[1296]: I1122 00:58:25.357782    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkvgc\" (UniqueName: \"kubernetes.io/projected/a2f33c9b-90d6-4197-9606-48fd95ff1ef2-kube-api-access-gkvgc\") pod \"coredns-66bc5c9577-448gn\" (UID: \"a2f33c9b-90d6-4197-9606-48fd95ff1ef2\") " pod="kube-system/coredns-66bc5c9577-448gn"
	Nov 22 00:58:25 default-k8s-diff-port-882305 kubelet[1296]: I1122 00:58:25.357833    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/fc6390d1-3d5c-4f70-a9bb-7e5d41d44f2a-tmp\") pod \"storage-provisioner\" (UID: \"fc6390d1-3d5c-4f70-a9bb-7e5d41d44f2a\") " pod="kube-system/storage-provisioner"
	Nov 22 00:58:25 default-k8s-diff-port-882305 kubelet[1296]: I1122 00:58:25.357853    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx2qs\" (UniqueName: \"kubernetes.io/projected/fc6390d1-3d5c-4f70-a9bb-7e5d41d44f2a-kube-api-access-lx2qs\") pod \"storage-provisioner\" (UID: \"fc6390d1-3d5c-4f70-a9bb-7e5d41d44f2a\") " pod="kube-system/storage-provisioner"
	Nov 22 00:58:25 default-k8s-diff-port-882305 kubelet[1296]: W1122 00:58:25.728879    1296 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/3f972239d661c1a7b13b144967a8f04cbcdb72c736bec80a78bae8fab1d357e1/crio-c585dab44a49adbb40e16e50bde1f20a00c8a07d67567f02ae8096afc167b827 WatchSource:0}: Error finding container c585dab44a49adbb40e16e50bde1f20a00c8a07d67567f02ae8096afc167b827: Status 404 returned error can't find the container with id c585dab44a49adbb40e16e50bde1f20a00c8a07d67567f02ae8096afc167b827
	Nov 22 00:58:26 default-k8s-diff-port-882305 kubelet[1296]: I1122 00:58:26.021204    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-448gn" podStartSLOduration=15.021185752 podStartE2EDuration="15.021185752s" podCreationTimestamp="2025-11-22 00:58:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:58:25.887221877 +0000 UTC m=+19.301843235" watchObservedRunningTime="2025-11-22 00:58:26.021185752 +0000 UTC m=+19.435807110"
	Nov 22 00:58:26 default-k8s-diff-port-882305 kubelet[1296]: I1122 00:58:26.022378    1296 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.022362235 podStartE2EDuration="13.022362235s" podCreationTimestamp="2025-11-22 00:58:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:58:26.020858049 +0000 UTC m=+19.435479399" watchObservedRunningTime="2025-11-22 00:58:26.022362235 +0000 UTC m=+19.436983585"
	Nov 22 00:58:28 default-k8s-diff-port-882305 kubelet[1296]: I1122 00:58:28.903207    1296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9sjl\" (UniqueName: \"kubernetes.io/projected/c3ec38da-dbd6-47f5-acb8-b65445289488-kube-api-access-m9sjl\") pod \"busybox\" (UID: \"c3ec38da-dbd6-47f5-acb8-b65445289488\") " pod="default/busybox"
	Nov 22 00:58:38 default-k8s-diff-port-882305 kubelet[1296]: E1122 00:58:38.998304    1296 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:60836->127.0.0.1:38035: write tcp 127.0.0.1:60836->127.0.0.1:38035: write: broken pipe
	
	
	==> storage-provisioner [df7389f58d2be61762cba968104178c65c4ab29eb057990b61233586e8b4b0d6] <==
	I1122 00:58:25.743613       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1122 00:58:25.780661       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1122 00:58:25.780707       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1122 00:58:25.787295       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:58:25.802055       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1122 00:58:25.802433       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1122 00:58:25.802720       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-882305_46567fab-edc3-4101-a71e-de3a013bc8cd!
	I1122 00:58:25.828099       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5cef023a-e193-4fc5-8350-b0d9fd8c5815", APIVersion:"v1", ResourceVersion:"446", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-882305_46567fab-edc3-4101-a71e-de3a013bc8cd became leader
	W1122 00:58:25.829339       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:58:25.836836       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1122 00:58:25.915733       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-882305_46567fab-edc3-4101-a71e-de3a013bc8cd!
	W1122 00:58:27.840863       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:58:27.847861       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:58:29.852288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:58:29.860891       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:58:31.878007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:58:31.888962       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:58:33.893558       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:58:33.900475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:58:35.904295       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:58:35.910153       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:58:37.913753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:58:37.919532       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:58:39.923468       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:58:39.931229       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-882305 -n default-k8s-diff-port-882305
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-882305 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.93s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.88s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-683181 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-683181 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (269.833928ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:58:58Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-683181 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
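Note on the failure mode: the exit status 11 above corresponds to the MK_ADDON_ENABLE_PAUSED check shown in stderr: before enabling the addon, minikube verifies whether the node's containers are paused by running the runtime's list command inside the node, and that check failed because /run/runc was missing ("sudo runc list -f json" -> "open /run/runc: no such file or directory"). A minimal manual reproduction sketch, assuming the newest-cni-683181 profile is still running (the minikube ssh invocation here is illustrative and not part of the test itself):

	out/minikube-linux-arm64 ssh -p newest-cni-683181 -- sudo runc list -f json
	out/minikube-linux-arm64 ssh -p newest-cni-683181 -- ls -la /run/runc

If /run/runc does not exist inside the node, the first command exits non-zero with the same "open /run/runc: no such file or directory" error that minikube surfaces as MK_ADDON_ENABLE_PAUSED.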
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-683181
helpers_test.go:243: (dbg) docker inspect newest-cni-683181:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "135c0ae6b0320d87bc11e3af1c9defb75409d4de75a05dd6fb885ab556eb0fcb",
	        "Created": "2025-11-22T00:58:25.478632838Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 718743,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-22T00:58:25.574466145Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:97061622515a85470dad13cb046ad5fc5021fcd2b05aa921a620abb52951cd5d",
	        "ResolvConfPath": "/var/lib/docker/containers/135c0ae6b0320d87bc11e3af1c9defb75409d4de75a05dd6fb885ab556eb0fcb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/135c0ae6b0320d87bc11e3af1c9defb75409d4de75a05dd6fb885ab556eb0fcb/hostname",
	        "HostsPath": "/var/lib/docker/containers/135c0ae6b0320d87bc11e3af1c9defb75409d4de75a05dd6fb885ab556eb0fcb/hosts",
	        "LogPath": "/var/lib/docker/containers/135c0ae6b0320d87bc11e3af1c9defb75409d4de75a05dd6fb885ab556eb0fcb/135c0ae6b0320d87bc11e3af1c9defb75409d4de75a05dd6fb885ab556eb0fcb-json.log",
	        "Name": "/newest-cni-683181",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-683181:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-683181",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "135c0ae6b0320d87bc11e3af1c9defb75409d4de75a05dd6fb885ab556eb0fcb",
	                "LowerDir": "/var/lib/docker/overlay2/fe785fd9359d610347ceff171bafa42142111d5ef9b3343e32ddfef45bc62e2d-init/diff:/var/lib/docker/overlay2/7e8788c6de692bc1c3758a2bb2c4b8da0fbba26855f855c0f3b655bfbdd92f8e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fe785fd9359d610347ceff171bafa42142111d5ef9b3343e32ddfef45bc62e2d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fe785fd9359d610347ceff171bafa42142111d5ef9b3343e32ddfef45bc62e2d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fe785fd9359d610347ceff171bafa42142111d5ef9b3343e32ddfef45bc62e2d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-683181",
	                "Source": "/var/lib/docker/volumes/newest-cni-683181/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-683181",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-683181",
	                "name.minikube.sigs.k8s.io": "newest-cni-683181",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6e2f0b48671c0c12f1d1e4e3d6896b1bc05c185f5837ca32aaecea057618bcf7",
	            "SandboxKey": "/var/run/docker/netns/6e2f0b48671c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33807"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33808"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33811"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33809"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33810"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-683181": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:e3:84:aa:f1:ad",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "621737e0fccb923382516e66d395dca2d3f734654251424cf1f7cd380f8144e7",
	                    "EndpointID": "7ab9a1af78f5966082b937b7b572d9003dbbfdcdb7ab45eaec864333e933d649",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-683181",
	                        "135c0ae6b032"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-683181 -n newest-cni-683181
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-683181 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-683181 logs -n 25: (1.21619849s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p embed-certs-879000 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-879000           │ jenkins │ v1.37.0 │ 22 Nov 25 00:55 UTC │ 22 Nov 25 00:56 UTC │
	│ addons  │ enable metrics-server -p no-preload-165130 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-165130            │ jenkins │ v1.37.0 │ 22 Nov 25 00:56 UTC │                     │
	│ stop    │ -p no-preload-165130 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-165130            │ jenkins │ v1.37.0 │ 22 Nov 25 00:56 UTC │ 22 Nov 25 00:56 UTC │
	│ addons  │ enable dashboard -p no-preload-165130 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-165130            │ jenkins │ v1.37.0 │ 22 Nov 25 00:56 UTC │ 22 Nov 25 00:56 UTC │
	│ start   │ -p no-preload-165130 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-165130            │ jenkins │ v1.37.0 │ 22 Nov 25 00:56 UTC │ 22 Nov 25 00:57 UTC │
	│ addons  │ enable metrics-server -p embed-certs-879000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-879000           │ jenkins │ v1.37.0 │ 22 Nov 25 00:56 UTC │                     │
	│ stop    │ -p embed-certs-879000 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-879000           │ jenkins │ v1.37.0 │ 22 Nov 25 00:56 UTC │ 22 Nov 25 00:57 UTC │
	│ addons  │ enable dashboard -p embed-certs-879000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-879000           │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │ 22 Nov 25 00:57 UTC │
	│ start   │ -p embed-certs-879000 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-879000           │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │ 22 Nov 25 00:57 UTC │
	│ image   │ no-preload-165130 image list --format=json                                                                                                                                                                                                    │ no-preload-165130            │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │ 22 Nov 25 00:57 UTC │
	│ pause   │ -p no-preload-165130 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-165130            │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │                     │
	│ delete  │ -p no-preload-165130                                                                                                                                                                                                                          │ no-preload-165130            │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │ 22 Nov 25 00:57 UTC │
	│ delete  │ -p no-preload-165130                                                                                                                                                                                                                          │ no-preload-165130            │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │ 22 Nov 25 00:57 UTC │
	│ delete  │ -p disable-driver-mounts-046489                                                                                                                                                                                                               │ disable-driver-mounts-046489 │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │ 22 Nov 25 00:57 UTC │
	│ start   │ -p default-k8s-diff-port-882305 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-882305 │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │ 22 Nov 25 00:58 UTC │
	│ image   │ embed-certs-879000 image list --format=json                                                                                                                                                                                                   │ embed-certs-879000           │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │ 22 Nov 25 00:58 UTC │
	│ pause   │ -p embed-certs-879000 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-879000           │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │                     │
	│ delete  │ -p embed-certs-879000                                                                                                                                                                                                                         │ embed-certs-879000           │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │ 22 Nov 25 00:58 UTC │
	│ delete  │ -p embed-certs-879000                                                                                                                                                                                                                         │ embed-certs-879000           │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │ 22 Nov 25 00:58 UTC │
	│ start   │ -p newest-cni-683181 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-683181            │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │ 22 Nov 25 00:58 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-882305 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-882305 │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-882305 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-882305 │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │ 22 Nov 25 00:58 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-882305 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-882305 │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │ 22 Nov 25 00:58 UTC │
	│ start   │ -p default-k8s-diff-port-882305 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-882305 │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-683181 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-683181            │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 00:58:54
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1122 00:58:54.419559  721299 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:58:54.419707  721299 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:58:54.419719  721299 out.go:374] Setting ErrFile to fd 2...
	I1122 00:58:54.419725  721299 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:58:54.420083  721299 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1122 00:58:54.420561  721299 out.go:368] Setting JSON to false
	I1122 00:58:54.421590  721299 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":20451,"bootTime":1763752684,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1122 00:58:54.421659  721299 start.go:143] virtualization:  
	I1122 00:58:54.424881  721299 out.go:179] * [default-k8s-diff-port-882305] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1122 00:58:54.429985  721299 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:58:54.430052  721299 notify.go:221] Checking for updates...
	I1122 00:58:54.438868  721299 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:58:54.441999  721299 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1122 00:58:54.444866  721299 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-513600/.minikube
	I1122 00:58:54.447721  721299 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1122 00:58:54.450751  721299 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:58:54.454026  721299 config.go:182] Loaded profile config "default-k8s-diff-port-882305": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:58:54.454692  721299 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:58:54.497836  721299 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1122 00:58:54.497928  721299 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:58:54.573605  721299 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-22 00:58:54.563942877 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:58:54.573709  721299 docker.go:319] overlay module found
	I1122 00:58:54.576879  721299 out.go:179] * Using the docker driver based on existing profile
	I1122 00:58:54.579502  721299 start.go:309] selected driver: docker
	I1122 00:58:54.579520  721299 start.go:930] validating driver "docker" against &{Name:default-k8s-diff-port-882305 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-882305 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:58:54.579628  721299 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:58:54.580316  721299 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:58:54.643756  721299 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-22 00:58:54.634569111 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:58:54.644121  721299 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:58:54.644147  721299 cni.go:84] Creating CNI manager for ""
	I1122 00:58:54.644201  721299 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:58:54.644237  721299 start.go:353] cluster config:
	{Name:default-k8s-diff-port-882305 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-882305 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:58:54.647363  721299 out.go:179] * Starting "default-k8s-diff-port-882305" primary control-plane node in "default-k8s-diff-port-882305" cluster
	I1122 00:58:54.650517  721299 cache.go:134] Beginning downloading kic base image for docker with crio
	I1122 00:58:54.653298  721299 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:58:54.656003  721299 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:58:54.656052  721299 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1122 00:58:54.656068  721299 cache.go:65] Caching tarball of preloaded images
	I1122 00:58:54.656075  721299 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:58:54.656150  721299 preload.go:238] Found /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1122 00:58:54.656161  721299 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1122 00:58:54.656270  721299 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/config.json ...
	I1122 00:58:54.677360  721299 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:58:54.677384  721299 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:58:54.677399  721299 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:58:54.677422  721299 start.go:360] acquireMachinesLock for default-k8s-diff-port-882305: {Name:mk803954bb6347dd99a7e73d8fd5992e1319a31c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:58:54.677478  721299 start.go:364] duration metric: took 35.355µs to acquireMachinesLock for "default-k8s-diff-port-882305"
	I1122 00:58:54.677502  721299 start.go:96] Skipping create...Using existing machine configuration
	I1122 00:58:54.677512  721299 fix.go:54] fixHost starting: 
	I1122 00:58:54.677759  721299 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-882305 --format={{.State.Status}}
	I1122 00:58:54.715551  721299 fix.go:112] recreateIfNeeded on default-k8s-diff-port-882305: state=Stopped err=<nil>
	W1122 00:58:54.715579  721299 fix.go:138] unexpected machine state, will restart: <nil>
	I1122 00:58:50.912365  718325 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1122 00:58:50.916494  718325 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1122 00:58:50.916515  718325 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1122 00:58:50.929557  718325 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1122 00:58:51.236030  718325 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1122 00:58:51.236214  718325 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:58:51.236297  718325 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-683181 minikube.k8s.io/updated_at=2025_11_22T00_58_51_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785 minikube.k8s.io/name=newest-cni-683181 minikube.k8s.io/primary=true
	I1122 00:58:51.250380  718325 ops.go:34] apiserver oom_adj: -16
	I1122 00:58:51.407570  718325 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:58:51.908661  718325 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:58:52.407681  718325 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:58:52.907705  718325 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:58:53.407986  718325 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:58:53.908308  718325 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:58:54.408555  718325 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:58:54.908341  718325 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:58:55.407997  718325 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:58:55.908516  718325 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:58:56.044227  718325 kubeadm.go:1114] duration metric: took 4.808082049s to wait for elevateKubeSystemPrivileges
	I1122 00:58:56.044256  718325 kubeadm.go:403] duration metric: took 24.07603084s to StartCluster
	I1122 00:58:56.044273  718325 settings.go:142] acquiring lock: {Name:mk6c31eb57ec65b047b78b4e1046e03fe7cc77bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:58:56.044335  718325 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1122 00:58:56.044995  718325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/kubeconfig: {Name:mkcd094d8a765e5a81369c0fb33cb5e696b17bb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:58:56.045207  718325 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:58:56.045288  718325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1122 00:58:56.045533  718325 config.go:182] Loaded profile config "newest-cni-683181": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:58:56.045565  718325 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1122 00:58:56.045623  718325 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-683181"
	I1122 00:58:56.045636  718325 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-683181"
	I1122 00:58:56.045656  718325 host.go:66] Checking if "newest-cni-683181" exists ...
	I1122 00:58:56.046184  718325 cli_runner.go:164] Run: docker container inspect newest-cni-683181 --format={{.State.Status}}
	I1122 00:58:56.046672  718325 addons.go:70] Setting default-storageclass=true in profile "newest-cni-683181"
	I1122 00:58:56.046706  718325 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-683181"
	I1122 00:58:56.047011  718325 cli_runner.go:164] Run: docker container inspect newest-cni-683181 --format={{.State.Status}}
	I1122 00:58:56.049818  718325 out.go:179] * Verifying Kubernetes components...
	I1122 00:58:56.058100  718325 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:58:56.088003  718325 addons.go:239] Setting addon default-storageclass=true in "newest-cni-683181"
	I1122 00:58:56.088052  718325 host.go:66] Checking if "newest-cni-683181" exists ...
	I1122 00:58:56.088501  718325 cli_runner.go:164] Run: docker container inspect newest-cni-683181 --format={{.State.Status}}
	I1122 00:58:56.095586  718325 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1122 00:58:56.099129  718325 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:58:56.099154  718325 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1122 00:58:56.099232  718325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-683181
	I1122 00:58:56.119589  718325 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1122 00:58:56.119623  718325 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1122 00:58:56.119686  718325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-683181
	I1122 00:58:56.166045  718325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/newest-cni-683181/id_rsa Username:docker}
	I1122 00:58:56.168498  718325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33807 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/newest-cni-683181/id_rsa Username:docker}
	I1122 00:58:56.473444  718325 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:58:56.491411  718325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1122 00:58:56.491626  718325 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:58:56.530559  718325 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1122 00:58:57.195131  718325 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1122 00:58:57.196072  718325 api_server.go:52] waiting for apiserver process to appear ...
	I1122 00:58:57.197928  718325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:58:57.230579  718325 api_server.go:72] duration metric: took 1.185345887s to wait for apiserver process to appear ...
	I1122 00:58:57.230602  718325 api_server.go:88] waiting for apiserver healthz status ...
	I1122 00:58:57.230627  718325 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:58:57.234516  718325 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1122 00:58:57.238149  718325 addons.go:530] duration metric: took 1.192575566s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1122 00:58:57.241501  718325 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1122 00:58:57.242476  718325 api_server.go:141] control plane version: v1.34.1
	I1122 00:58:57.242501  718325 api_server.go:131] duration metric: took 11.892116ms to wait for apiserver health ...
	I1122 00:58:57.242512  718325 system_pods.go:43] waiting for kube-system pods to appear ...
	I1122 00:58:57.246234  718325 system_pods.go:59] 9 kube-system pods found
	I1122 00:58:57.246283  718325 system_pods.go:61] "coredns-66bc5c9577-t729j" [aeaa479f-a434-45f0-a153-9930c355bc90] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1122 00:58:57.246294  718325 system_pods.go:61] "coredns-66bc5c9577-zxtxg" [0809577c-cba1-4ded-a828-b52619d317b0] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1122 00:58:57.246305  718325 system_pods.go:61] "etcd-newest-cni-683181" [a7afb010-b8c8-4f7c-b259-9bda74317a71] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1122 00:58:57.246317  718325 system_pods.go:61] "kindnet-bpmkp" [a8d571f3-91a7-4136-8402-f32f10864617] Running
	I1122 00:58:57.246322  718325 system_pods.go:61] "kube-apiserver-newest-cni-683181" [0ae77e9e-2bcc-4530-a9af-edb6a2775a1c] Running
	I1122 00:58:57.246340  718325 system_pods.go:61] "kube-controller-manager-newest-cni-683181" [b8386b4e-6a08-4989-b637-baf2a4d446bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1122 00:58:57.246344  718325 system_pods.go:61] "kube-proxy-s5mhf" [386ab39d-8d29-482b-b752-52257e97dde8] Running
	I1122 00:58:57.246352  718325 system_pods.go:61] "kube-scheduler-newest-cni-683181" [8b62cbb0-d4b5-487b-bc74-7459fb8fc92f] Running
	I1122 00:58:57.246359  718325 system_pods.go:61] "storage-provisioner" [1b4ee39a-586b-4b95-b610-8cd6ad0ca178] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1122 00:58:57.246366  718325 system_pods.go:74] duration metric: took 3.848412ms to wait for pod list to return data ...
	I1122 00:58:57.246377  718325 default_sa.go:34] waiting for default service account to be created ...
	I1122 00:58:57.249774  718325 default_sa.go:45] found service account: "default"
	I1122 00:58:57.249833  718325 default_sa.go:55] duration metric: took 3.414111ms for default service account to be created ...
	I1122 00:58:57.249847  718325 kubeadm.go:587] duration metric: took 1.204619046s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1122 00:58:57.249874  718325 node_conditions.go:102] verifying NodePressure condition ...
	I1122 00:58:57.256138  718325 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1122 00:58:57.256173  718325 node_conditions.go:123] node cpu capacity is 2
	I1122 00:58:57.256195  718325 node_conditions.go:105] duration metric: took 6.314909ms to run NodePressure ...
	I1122 00:58:57.256208  718325 start.go:242] waiting for startup goroutines ...
	I1122 00:58:57.700244  718325 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-683181" context rescaled to 1 replicas
	I1122 00:58:57.700280  718325 start.go:247] waiting for cluster config update ...
	I1122 00:58:57.700293  718325 start.go:256] writing updated cluster config ...
	I1122 00:58:57.700589  718325 ssh_runner.go:195] Run: rm -f paused
	I1122 00:58:57.760266  718325 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1122 00:58:57.763536  718325 out.go:179] * Done! kubectl is now configured to use "newest-cni-683181" cluster and "default" namespace by default
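Two of the steps logged above can be spot-checked outside the harness: the apiserver /healthz wait (api_server.go at 00:58:57) and the host.minikube.internal record injected into the CoreDNS ConfigMap (the sed pipeline at 00:58:56). A minimal sketch, assuming kubectl is on PATH and the current kubeconfig context is the newest-cni-683181 context this run just wrote:

// postcheck.go — a hypothetical spot-check, not part of the test suite.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func kubectl(args ...string) (string, error) {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	return string(out), err
}

func main() {
	// Same endpoint the log polls, but reached with kubeconfig credentials:
	// /healthz should return the literal "ok".
	health, err := kubectl("get", "--raw", "/healthz")
	fmt.Printf("healthz: %q err=%v\n", strings.TrimSpace(health), err)

	// The sed pipeline above adds a hosts block with 192.168.76.1
	// host.minikube.internal to the Corefile; confirm it is live.
	corefile, err := kubectl("-n", "kube-system", "get", "configmap", "coredns",
		"-o", "jsonpath={.data.Corefile}")
	if err != nil {
		fmt.Printf("could not read Corefile: %v\n", err)
		return
	}
	fmt.Println("host record injected:", strings.Contains(corefile, "host.minikube.internal"))
}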
	
	
	==> CRI-O <==
	Nov 22 00:58:55 newest-cni-683181 crio[840]: time="2025-11-22T00:58:55.949398815Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:58:55 newest-cni-683181 crio[840]: time="2025-11-22T00:58:55.95671239Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=b06e335f-7193-4fdc-9439-75447a5fee6f name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 22 00:58:55 newest-cni-683181 crio[840]: time="2025-11-22T00:58:55.9647125Z" level=info msg="Ran pod sandbox f468fcffcc257f0b734caf66dc4fe009b3947a62b97fef37b10b6a186302e2a9 with infra container: kube-system/kube-proxy-s5mhf/POD" id=b06e335f-7193-4fdc-9439-75447a5fee6f name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 22 00:58:55 newest-cni-683181 crio[840]: time="2025-11-22T00:58:55.968047615Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=47b2367a-4372-4432-978e-6893eeb613fc name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:58:55 newest-cni-683181 crio[840]: time="2025-11-22T00:58:55.971612656Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=78db17c0-c433-4717-9297-e9eef28c8c24 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:58:55 newest-cni-683181 crio[840]: time="2025-11-22T00:58:55.978843107Z" level=info msg="Creating container: kube-system/kube-proxy-s5mhf/kube-proxy" id=0ad40860-b230-4a50-98ee-effaf4090b44 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:58:55 newest-cni-683181 crio[840]: time="2025-11-22T00:58:55.979107Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:58:55 newest-cni-683181 crio[840]: time="2025-11-22T00:58:55.996437275Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:58:56 newest-cni-683181 crio[840]: time="2025-11-22T00:58:56.007170984Z" level=info msg="Running pod sandbox: kube-system/kindnet-bpmkp/POD" id=ff847ab7-11af-4d47-9268-1166d5077cea name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 22 00:58:56 newest-cni-683181 crio[840]: time="2025-11-22T00:58:56.007399268Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:58:56 newest-cni-683181 crio[840]: time="2025-11-22T00:58:56.016593937Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=ff847ab7-11af-4d47-9268-1166d5077cea name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 22 00:58:56 newest-cni-683181 crio[840]: time="2025-11-22T00:58:56.023205198Z" level=info msg="Ran pod sandbox bd94455a51682e0e6c770e48cce92de6fcc2bcc148521d738ebf9d8ad11d7450 with infra container: kube-system/kindnet-bpmkp/POD" id=ff847ab7-11af-4d47-9268-1166d5077cea name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 22 00:58:56 newest-cni-683181 crio[840]: time="2025-11-22T00:58:56.024946521Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=7e4ee947-bf32-4346-b737-71416da082c4 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:58:56 newest-cni-683181 crio[840]: time="2025-11-22T00:58:56.031049547Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=3c9929d6-ca8b-4cf7-a8b8-aa4b4fa3f1b7 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:58:56 newest-cni-683181 crio[840]: time="2025-11-22T00:58:56.032575204Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:58:56 newest-cni-683181 crio[840]: time="2025-11-22T00:58:56.056326436Z" level=info msg="Creating container: kube-system/kindnet-bpmkp/kindnet-cni" id=6b7d4454-5dbd-44ee-8031-8b5fd742d4be name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:58:56 newest-cni-683181 crio[840]: time="2025-11-22T00:58:56.056583561Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:58:56 newest-cni-683181 crio[840]: time="2025-11-22T00:58:56.07657578Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:58:56 newest-cni-683181 crio[840]: time="2025-11-22T00:58:56.087029644Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:58:56 newest-cni-683181 crio[840]: time="2025-11-22T00:58:56.140495822Z" level=info msg="Created container 0decc4b30b93f1424a6fa01bc66b4ef23c086373b4df5895555b4d542c904b6c: kube-system/kube-proxy-s5mhf/kube-proxy" id=0ad40860-b230-4a50-98ee-effaf4090b44 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:58:56 newest-cni-683181 crio[840]: time="2025-11-22T00:58:56.145188222Z" level=info msg="Starting container: 0decc4b30b93f1424a6fa01bc66b4ef23c086373b4df5895555b4d542c904b6c" id=2965cb65-1351-4095-ab01-a0e9928ba64b name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:58:56 newest-cni-683181 crio[840]: time="2025-11-22T00:58:56.151730693Z" level=info msg="Started container" PID=1436 containerID=0decc4b30b93f1424a6fa01bc66b4ef23c086373b4df5895555b4d542c904b6c description=kube-system/kube-proxy-s5mhf/kube-proxy id=2965cb65-1351-4095-ab01-a0e9928ba64b name=/runtime.v1.RuntimeService/StartContainer sandboxID=f468fcffcc257f0b734caf66dc4fe009b3947a62b97fef37b10b6a186302e2a9
	Nov 22 00:58:56 newest-cni-683181 crio[840]: time="2025-11-22T00:58:56.152740255Z" level=info msg="Created container 81259c3be0988e9210f35f16ee8bb7982f05720bd7472b9649c7c7ee9bf93e66: kube-system/kindnet-bpmkp/kindnet-cni" id=6b7d4454-5dbd-44ee-8031-8b5fd742d4be name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:58:56 newest-cni-683181 crio[840]: time="2025-11-22T00:58:56.154474596Z" level=info msg="Starting container: 81259c3be0988e9210f35f16ee8bb7982f05720bd7472b9649c7c7ee9bf93e66" id=aa04de52-1a9e-44cc-bf65-7e0deedcf511 name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:58:56 newest-cni-683181 crio[840]: time="2025-11-22T00:58:56.156055564Z" level=info msg="Started container" PID=1438 containerID=81259c3be0988e9210f35f16ee8bb7982f05720bd7472b9649c7c7ee9bf93e66 description=kube-system/kindnet-bpmkp/kindnet-cni id=aa04de52-1a9e-44cc-bf65-7e0deedcf511 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bd94455a51682e0e6c770e48cce92de6fcc2bcc148521d738ebf9d8ad11d7450
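The sandbox and container IDs in the CRI-O log above reappear in the "container status" table below; the same view can be pulled straight from CRI-O on the node. A sketch, assuming the binary path (out/minikube-linux-arm64) and profile name from this run, and that crictl is present in the node image as it is in kicbase:

// cri_containers.go — hypothetical helper, not part of the suite.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// List all CRI containers (running and exited) over the node's SSH session.
	out, err := exec.Command("out/minikube-linux-arm64",
		"-p", "newest-cni-683181",
		"ssh", "--", "sudo", "crictl", "ps", "-a").CombinedOutput()
	if err != nil {
		fmt.Printf("minikube ssh failed: %v\n", err)
	}
	fmt.Print(string(out))
}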
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	81259c3be0988       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   3 seconds ago       Running             kindnet-cni               0                   bd94455a51682       kindnet-bpmkp                               kube-system
	0decc4b30b93f       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   3 seconds ago       Running             kube-proxy                0                   f468fcffcc257       kube-proxy-s5mhf                            kube-system
	c06fbfdefbe28       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   16 seconds ago      Running             kube-controller-manager   0                   abac565af1575       kube-controller-manager-newest-cni-683181   kube-system
	a92ce1e82a4e6       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   16 seconds ago      Running             etcd                      0                   af539842695be       etcd-newest-cni-683181                      kube-system
	74651e2c9cc4e       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   16 seconds ago      Running             kube-scheduler            0                   afe4aac04bbad       kube-scheduler-newest-cni-683181            kube-system
	2a544c723f726       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   16 seconds ago      Running             kube-apiserver            0                   691a38bbb55d2       kube-apiserver-newest-cni-683181            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-683181
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-683181
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=newest-cni-683181
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T00_58_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:58:47 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-683181
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:58:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:58:50 +0000   Sat, 22 Nov 2025 00:58:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:58:50 +0000   Sat, 22 Nov 2025 00:58:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:58:50 +0000   Sat, 22 Nov 2025 00:58:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 22 Nov 2025 00:58:50 +0000   Sat, 22 Nov 2025 00:58:43 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-683181
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c92eea4d3c03c0156e01fcf8691e3907
	  System UUID:                6a33f369-61a2-4323-af82-24618416d16b
	  Boot ID:                    72ac7385-472f-47d1-a23e-bd80468d6e09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-683181                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         9s
	  kube-system                 kindnet-bpmkp                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4s
	  kube-system                 kube-apiserver-newest-cni-683181             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-controller-manager-newest-cni-683181    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-proxy-s5mhf                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-scheduler-newest-cni-683181             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  17s (x8 over 17s)  kubelet          Node newest-cni-683181 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17s (x8 over 17s)  kubelet          Node newest-cni-683181 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17s (x8 over 17s)  kubelet          Node newest-cni-683181 status is now: NodeHasSufficientPID
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 9s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9s                 kubelet          Node newest-cni-683181 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s                 kubelet          Node newest-cni-683181 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s                 kubelet          Node newest-cni-683181 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5s                 node-controller  Node newest-cni-683181 event: Registered Node newest-cni-683181 in Controller
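The node description above shows the two symptoms behind the Pending coredns and storage-provisioner pods: Ready=False because no CNI configuration file exists yet in /etc/cni/net.d/, and the resulting node.kubernetes.io/not-ready taint. A small illustrative check (not from the suite) under the same profile/context names as this log:

// notready_check.go — hypothetical; shells out to kubectl and minikube ssh.
package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	if err != nil {
		fmt.Printf("(exit error: %v)\n", err)
	}
}

func main() {
	// Taints on the single node; node.kubernetes.io/not-ready stays until
	// kindnet writes a CNI config and the kubelet reports Ready.
	run("kubectl", "--context", "newest-cni-683181",
		"get", "nodes", "-o", "jsonpath={.items[0].spec.taints}")
	fmt.Println()
	// CNI config directory inside the node; empty output matches the
	// "no CNI configuration file in /etc/cni/net.d/" condition above.
	run("out/minikube-linux-arm64", "-p", "newest-cni-683181",
		"ssh", "--", "ls", "/etc/cni/net.d/")
}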
	
	
	==> dmesg <==
	[Nov22 00:36] overlayfs: idmapped layers are currently not supported
	[ +18.168104] overlayfs: idmapped layers are currently not supported
	[Nov22 00:37] overlayfs: idmapped layers are currently not supported
	[ +56.322609] overlayfs: idmapped layers are currently not supported
	[Nov22 00:38] overlayfs: idmapped layers are currently not supported
	[Nov22 00:39] overlayfs: idmapped layers are currently not supported
	[ +23.174928] overlayfs: idmapped layers are currently not supported
	[Nov22 00:41] overlayfs: idmapped layers are currently not supported
	[Nov22 00:42] overlayfs: idmapped layers are currently not supported
	[Nov22 00:44] overlayfs: idmapped layers are currently not supported
	[Nov22 00:45] overlayfs: idmapped layers are currently not supported
	[Nov22 00:46] overlayfs: idmapped layers are currently not supported
	[Nov22 00:48] overlayfs: idmapped layers are currently not supported
	[Nov22 00:50] overlayfs: idmapped layers are currently not supported
	[Nov22 00:51] overlayfs: idmapped layers are currently not supported
	[ +11.900293] overlayfs: idmapped layers are currently not supported
	[ +28.922055] overlayfs: idmapped layers are currently not supported
	[Nov22 00:52] overlayfs: idmapped layers are currently not supported
	[Nov22 00:53] overlayfs: idmapped layers are currently not supported
	[Nov22 00:54] overlayfs: idmapped layers are currently not supported
	[Nov22 00:55] overlayfs: idmapped layers are currently not supported
	[Nov22 00:56] overlayfs: idmapped layers are currently not supported
	[Nov22 00:57] overlayfs: idmapped layers are currently not supported
	[Nov22 00:58] overlayfs: idmapped layers are currently not supported
	[ +43.407301] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [a92ce1e82a4e6d8e162eb24d582401d66ed110a357d0978c812f816871f5c2ef] <==
	{"level":"warn","ts":"2025-11-22T00:58:46.570260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:58:46.587000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:58:46.608758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:58:46.621527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:58:46.637731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:58:46.662305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:58:46.676311Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:58:46.693161Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:58:46.708498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:58:46.746595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:58:46.765196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:58:46.778338Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:58:46.798539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:58:46.810476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:58:46.830862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:58:46.846778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:58:46.863126Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:58:46.873571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:58:46.898361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:58:46.914928Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:58:46.931359Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:58:46.958762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:58:46.970842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:58:46.994457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:58:47.098304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38630","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 00:58:59 up  5:40,  0 user,  load average: 3.12, 3.67, 2.92
	Linux newest-cni-683181 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [81259c3be0988e9210f35f16ee8bb7982f05720bd7472b9649c7c7ee9bf93e66] <==
	I1122 00:58:56.313293       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1122 00:58:56.315385       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1122 00:58:56.315585       1 main.go:148] setting mtu 1500 for CNI 
	I1122 00:58:56.315643       1 main.go:178] kindnetd IP family: "ipv4"
	I1122 00:58:56.315752       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-22T00:58:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1122 00:58:56.515593       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1122 00:58:56.515613       1 controller.go:381] "Waiting for informer caches to sync"
	I1122 00:58:56.515621       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1122 00:58:56.515903       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [2a544c723f726dffc4a071bb1369b0afd3901023acd63c0795beea4219c483fc] <==
	I1122 00:58:47.900851       1 policy_source.go:240] refreshing policies
	I1122 00:58:47.910666       1 controller.go:667] quota admission added evaluator for: namespaces
	I1122 00:58:47.925000       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1122 00:58:47.925365       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1122 00:58:47.963662       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1122 00:58:47.964257       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:58:47.985955       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:58:47.986098       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1122 00:58:48.625701       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1122 00:58:48.630725       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1122 00:58:48.630753       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1122 00:58:49.350015       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1122 00:58:49.399589       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1122 00:58:49.543402       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1122 00:58:49.556317       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1122 00:58:49.557515       1 controller.go:667] quota admission added evaluator for: endpoints
	I1122 00:58:49.566988       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1122 00:58:49.786394       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1122 00:58:50.294709       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1122 00:58:50.322873       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1122 00:58:50.336994       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1122 00:58:55.591563       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1122 00:58:55.767028       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1122 00:58:55.921744       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:58:55.930778       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [c06fbfdefbe281979664c7c5964b6d63a7794c52b551800ba421675a6eaaa255] <==
	I1122 00:58:54.853111       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1122 00:58:54.853170       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1122 00:58:54.853510       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1122 00:58:54.853556       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1122 00:58:54.853615       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1122 00:58:54.853639       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1122 00:58:54.853664       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1122 00:58:54.854784       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:58:54.854816       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1122 00:58:54.854854       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1122 00:58:54.854915       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1122 00:58:54.863326       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1122 00:58:54.863445       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1122 00:58:54.863503       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1122 00:58:54.865209       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1122 00:58:54.865289       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1122 00:58:54.873060       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1122 00:58:54.879847       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-683181" podCIDRs=["10.42.0.0/24"]
	I1122 00:58:54.886821       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1122 00:58:54.886829       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1122 00:58:54.886895       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1122 00:58:54.889365       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1122 00:58:54.891780       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1122 00:58:54.898286       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:58:54.901908       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [0decc4b30b93f1424a6fa01bc66b4ef23c086373b4df5895555b4d542c904b6c] <==
	I1122 00:58:56.314891       1 server_linux.go:53] "Using iptables proxy"
	I1122 00:58:56.455676       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1122 00:58:56.556300       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1122 00:58:56.556340       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1122 00:58:56.556423       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1122 00:58:56.647136       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1122 00:58:56.647188       1 server_linux.go:132] "Using iptables Proxier"
	I1122 00:58:56.654767       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1122 00:58:56.655064       1 server.go:527] "Version info" version="v1.34.1"
	I1122 00:58:56.655089       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:58:56.656478       1 config.go:200] "Starting service config controller"
	I1122 00:58:56.656489       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1122 00:58:56.656522       1 config.go:106] "Starting endpoint slice config controller"
	I1122 00:58:56.656527       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1122 00:58:56.656545       1 config.go:403] "Starting serviceCIDR config controller"
	I1122 00:58:56.656549       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1122 00:58:56.657163       1 config.go:309] "Starting node config controller"
	I1122 00:58:56.657171       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1122 00:58:56.657177       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1122 00:58:56.757921       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1122 00:58:56.757963       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1122 00:58:56.758006       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
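The kube-proxy log above reports "Using iptables Proxier"; once its caches sync, the node's nat table should carry KUBE-SERVICES chains. A hedged sketch for confirming that, reusing the binary path and profile name from this run:

// proxy_rules.go — hypothetical; not part of the suite.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Dump the nat table inside the node and keep only the KUBE-SERVICES entries.
	out, err := exec.Command("out/minikube-linux-arm64",
		"-p", "newest-cni-683181",
		"ssh", "--", "sudo", "iptables-save", "-t", "nat").CombinedOutput()
	if err != nil {
		fmt.Printf("iptables-save failed: %v\n%s\n", err, out)
		return
	}
	for _, line := range strings.Split(string(out), "\n") {
		if strings.Contains(line, "KUBE-SERVICES") {
			fmt.Println(line)
		}
	}
}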
	
	
	==> kube-scheduler [74651e2c9cc4e9d0523204344583ea8602e0e850200775c3e126173ac13d4c7e] <==
	E1122 00:58:47.903850       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1122 00:58:47.903904       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1122 00:58:47.904019       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1122 00:58:47.905453       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1122 00:58:47.905664       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1122 00:58:47.905771       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1122 00:58:47.905921       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1122 00:58:47.906023       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1122 00:58:47.906128       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1122 00:58:47.906241       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1122 00:58:47.906408       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1122 00:58:47.906492       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1122 00:58:47.906648       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1122 00:58:48.734675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1122 00:58:48.754097       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1122 00:58:48.811874       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1122 00:58:48.824682       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1122 00:58:48.858259       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1122 00:58:48.865937       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1122 00:58:48.898019       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1122 00:58:48.936569       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1122 00:58:48.976049       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1122 00:58:48.976975       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1122 00:58:49.055666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	I1122 00:58:50.882879       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 22 00:58:50 newest-cni-683181 kubelet[1312]: I1122 00:58:50.635685    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d207dcf2421bcd07613fd4f094cf303b-usr-local-share-ca-certificates\") pod \"kube-apiserver-newest-cni-683181\" (UID: \"d207dcf2421bcd07613fd4f094cf303b\") " pod="kube-system/kube-apiserver-newest-cni-683181"
	Nov 22 00:58:50 newest-cni-683181 kubelet[1312]: I1122 00:58:50.635704    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d207dcf2421bcd07613fd4f094cf303b-usr-share-ca-certificates\") pod \"kube-apiserver-newest-cni-683181\" (UID: \"d207dcf2421bcd07613fd4f094cf303b\") " pod="kube-system/kube-apiserver-newest-cni-683181"
	Nov 22 00:58:51 newest-cni-683181 kubelet[1312]: I1122 00:58:51.195241    1312 apiserver.go:52] "Watching apiserver"
	Nov 22 00:58:51 newest-cni-683181 kubelet[1312]: I1122 00:58:51.232029    1312 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 22 00:58:51 newest-cni-683181 kubelet[1312]: I1122 00:58:51.324880    1312 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-683181"
	Nov 22 00:58:51 newest-cni-683181 kubelet[1312]: I1122 00:58:51.325079    1312 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-683181"
	Nov 22 00:58:51 newest-cni-683181 kubelet[1312]: I1122 00:58:51.369709    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-683181" podStartSLOduration=1.369693544 podStartE2EDuration="1.369693544s" podCreationTimestamp="2025-11-22 00:58:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:58:51.369337714 +0000 UTC m=+1.252540886" watchObservedRunningTime="2025-11-22 00:58:51.369693544 +0000 UTC m=+1.252896699"
	Nov 22 00:58:51 newest-cni-683181 kubelet[1312]: E1122 00:58:51.369974    1312 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-683181\" already exists" pod="kube-system/etcd-newest-cni-683181"
	Nov 22 00:58:51 newest-cni-683181 kubelet[1312]: E1122 00:58:51.370204    1312 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-683181\" already exists" pod="kube-system/kube-apiserver-newest-cni-683181"
	Nov 22 00:58:51 newest-cni-683181 kubelet[1312]: I1122 00:58:51.408664    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-683181" podStartSLOduration=1.408633525 podStartE2EDuration="1.408633525s" podCreationTimestamp="2025-11-22 00:58:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:58:51.389937063 +0000 UTC m=+1.273140226" watchObservedRunningTime="2025-11-22 00:58:51.408633525 +0000 UTC m=+1.291836704"
	Nov 22 00:58:51 newest-cni-683181 kubelet[1312]: I1122 00:58:51.408776    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-683181" podStartSLOduration=1.408770423 podStartE2EDuration="1.408770423s" podCreationTimestamp="2025-11-22 00:58:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:58:51.408420075 +0000 UTC m=+1.291623238" watchObservedRunningTime="2025-11-22 00:58:51.408770423 +0000 UTC m=+1.291973586"
	Nov 22 00:58:51 newest-cni-683181 kubelet[1312]: I1122 00:58:51.457972    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-683181" podStartSLOduration=1.45795159 podStartE2EDuration="1.45795159s" podCreationTimestamp="2025-11-22 00:58:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:58:51.424381864 +0000 UTC m=+1.307585035" watchObservedRunningTime="2025-11-22 00:58:51.45795159 +0000 UTC m=+1.341154745"
	Nov 22 00:58:54 newest-cni-683181 kubelet[1312]: I1122 00:58:54.915937    1312 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 22 00:58:54 newest-cni-683181 kubelet[1312]: I1122 00:58:54.917658    1312 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 22 00:58:55 newest-cni-683181 kubelet[1312]: I1122 00:58:55.670880    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mnw9\" (UniqueName: \"kubernetes.io/projected/386ab39d-8d29-482b-b752-52257e97dde8-kube-api-access-4mnw9\") pod \"kube-proxy-s5mhf\" (UID: \"386ab39d-8d29-482b-b752-52257e97dde8\") " pod="kube-system/kube-proxy-s5mhf"
	Nov 22 00:58:55 newest-cni-683181 kubelet[1312]: I1122 00:58:55.671148    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/386ab39d-8d29-482b-b752-52257e97dde8-kube-proxy\") pod \"kube-proxy-s5mhf\" (UID: \"386ab39d-8d29-482b-b752-52257e97dde8\") " pod="kube-system/kube-proxy-s5mhf"
	Nov 22 00:58:55 newest-cni-683181 kubelet[1312]: I1122 00:58:55.671292    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/386ab39d-8d29-482b-b752-52257e97dde8-lib-modules\") pod \"kube-proxy-s5mhf\" (UID: \"386ab39d-8d29-482b-b752-52257e97dde8\") " pod="kube-system/kube-proxy-s5mhf"
	Nov 22 00:58:55 newest-cni-683181 kubelet[1312]: I1122 00:58:55.671413    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/386ab39d-8d29-482b-b752-52257e97dde8-xtables-lock\") pod \"kube-proxy-s5mhf\" (UID: \"386ab39d-8d29-482b-b752-52257e97dde8\") " pod="kube-system/kube-proxy-s5mhf"
	Nov 22 00:58:55 newest-cni-683181 kubelet[1312]: I1122 00:58:55.772211    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a8d571f3-91a7-4136-8402-f32f10864617-cni-cfg\") pod \"kindnet-bpmkp\" (UID: \"a8d571f3-91a7-4136-8402-f32f10864617\") " pod="kube-system/kindnet-bpmkp"
	Nov 22 00:58:55 newest-cni-683181 kubelet[1312]: I1122 00:58:55.772260    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a8d571f3-91a7-4136-8402-f32f10864617-xtables-lock\") pod \"kindnet-bpmkp\" (UID: \"a8d571f3-91a7-4136-8402-f32f10864617\") " pod="kube-system/kindnet-bpmkp"
	Nov 22 00:58:55 newest-cni-683181 kubelet[1312]: I1122 00:58:55.772280    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zvhl\" (UniqueName: \"kubernetes.io/projected/a8d571f3-91a7-4136-8402-f32f10864617-kube-api-access-2zvhl\") pod \"kindnet-bpmkp\" (UID: \"a8d571f3-91a7-4136-8402-f32f10864617\") " pod="kube-system/kindnet-bpmkp"
	Nov 22 00:58:55 newest-cni-683181 kubelet[1312]: I1122 00:58:55.772320    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a8d571f3-91a7-4136-8402-f32f10864617-lib-modules\") pod \"kindnet-bpmkp\" (UID: \"a8d571f3-91a7-4136-8402-f32f10864617\") " pod="kube-system/kindnet-bpmkp"
	Nov 22 00:58:55 newest-cni-683181 kubelet[1312]: I1122 00:58:55.816039    1312 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 22 00:58:56 newest-cni-683181 kubelet[1312]: I1122 00:58:56.376163    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-bpmkp" podStartSLOduration=1.376143492 podStartE2EDuration="1.376143492s" podCreationTimestamp="2025-11-22 00:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:58:56.37601164 +0000 UTC m=+6.259214803" watchObservedRunningTime="2025-11-22 00:58:56.376143492 +0000 UTC m=+6.259346655"
	Nov 22 00:58:56 newest-cni-683181 kubelet[1312]: I1122 00:58:56.416730    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-s5mhf" podStartSLOduration=1.416712119 podStartE2EDuration="1.416712119s" podCreationTimestamp="2025-11-22 00:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:58:56.416434999 +0000 UTC m=+6.299638154" watchObservedRunningTime="2025-11-22 00:58:56.416712119 +0000 UTC m=+6.299915372"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-683181 -n newest-cni-683181
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-683181 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-t729j storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-683181 describe pod coredns-66bc5c9577-t729j storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-683181 describe pod coredns-66bc5c9577-t729j storage-provisioner: exit status 1 (114.452381ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-t729j" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-683181 describe pod coredns-66bc5c9577-t729j storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.88s)
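The post-mortem above lists non-running pods with the field selector status.phase!=Running and then describes them; by the time the describe ran, both pods had apparently already been removed or replaced, hence the NotFound errors. A client-go equivalent of that listing, as an illustration only, assuming the current kubeconfig context points at the newest-cni-683181 cluster:

// nonrunning_pods.go — hypothetical; mirrors the helper's field-selector query.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same selector the post-mortem uses: every pod whose phase is not Running.
	pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s\t%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}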

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (6.88s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-683181 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-683181 --alsologtostderr -v=1: exit status 80 (2.506157806s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-683181 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1122 00:59:24.662956  725942 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:59:24.663148  725942 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:59:24.663173  725942 out.go:374] Setting ErrFile to fd 2...
	I1122 00:59:24.663203  725942 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:59:24.663533  725942 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1122 00:59:24.663849  725942 out.go:368] Setting JSON to false
	I1122 00:59:24.663894  725942 mustload.go:66] Loading cluster: newest-cni-683181
	I1122 00:59:24.664359  725942 config.go:182] Loaded profile config "newest-cni-683181": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:59:24.664869  725942 cli_runner.go:164] Run: docker container inspect newest-cni-683181 --format={{.State.Status}}
	I1122 00:59:24.692073  725942 host.go:66] Checking if "newest-cni-683181" exists ...
	I1122 00:59:24.692429  725942 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:59:24.790748  725942 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-22 00:59:24.775059484 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:59:24.791438  725942 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-683181 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1122 00:59:24.796931  725942 out.go:179] * Pausing node newest-cni-683181 ... 
	I1122 00:59:24.801680  725942 host.go:66] Checking if "newest-cni-683181" exists ...
	I1122 00:59:24.802024  725942 ssh_runner.go:195] Run: systemctl --version
	I1122 00:59:24.802067  725942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-683181
	I1122 00:59:24.835860  725942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/newest-cni-683181/id_rsa Username:docker}
	I1122 00:59:24.951520  725942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:59:24.977628  725942 pause.go:52] kubelet running: true
	I1122 00:59:24.977696  725942 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1122 00:59:25.414071  725942 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1122 00:59:25.414158  725942 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1122 00:59:25.521441  725942 cri.go:89] found id: "c6dc4dd7fc04ddecd3b2bb2080ec2863a7ec44e94fdeabd8e2cccba7fd814d22"
	I1122 00:59:25.521511  725942 cri.go:89] found id: "7e89eb5cadf037e487a32d9ea4517b84e0355838991972f6b49fefb3847298aa"
	I1122 00:59:25.521530  725942 cri.go:89] found id: "8765e88de49e2be9fd655ccd870bf0fbf040cf37c257af74ba0018ab6313b34a"
	I1122 00:59:25.521549  725942 cri.go:89] found id: "5b3cac023bb69eb303946916eb3bee91968b9f879ebd1c3aacc0ade3047e950b"
	I1122 00:59:25.521567  725942 cri.go:89] found id: "2cf11a2399791e45cf7ed67b2198c31cbb95b3ccb3913ab1861bd2d43031f670"
	I1122 00:59:25.521597  725942 cri.go:89] found id: "a49ade414a411cf4537b0004c3cb9293ea4b12b1790212ea98dcd0dc746c2e0f"
	I1122 00:59:25.521621  725942 cri.go:89] found id: ""
	I1122 00:59:25.521707  725942 ssh_runner.go:195] Run: sudo runc list -f json
	I1122 00:59:25.540527  725942 retry.go:31] will retry after 371.835506ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:59:25Z" level=error msg="open /run/runc: no such file or directory"
	I1122 00:59:25.912778  725942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:59:25.926757  725942 pause.go:52] kubelet running: false
	I1122 00:59:25.926869  725942 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1122 00:59:26.125371  725942 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1122 00:59:26.125492  725942 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1122 00:59:26.242838  725942 cri.go:89] found id: "c6dc4dd7fc04ddecd3b2bb2080ec2863a7ec44e94fdeabd8e2cccba7fd814d22"
	I1122 00:59:26.242875  725942 cri.go:89] found id: "7e89eb5cadf037e487a32d9ea4517b84e0355838991972f6b49fefb3847298aa"
	I1122 00:59:26.242880  725942 cri.go:89] found id: "8765e88de49e2be9fd655ccd870bf0fbf040cf37c257af74ba0018ab6313b34a"
	I1122 00:59:26.242885  725942 cri.go:89] found id: "5b3cac023bb69eb303946916eb3bee91968b9f879ebd1c3aacc0ade3047e950b"
	I1122 00:59:26.242888  725942 cri.go:89] found id: "2cf11a2399791e45cf7ed67b2198c31cbb95b3ccb3913ab1861bd2d43031f670"
	I1122 00:59:26.242892  725942 cri.go:89] found id: "a49ade414a411cf4537b0004c3cb9293ea4b12b1790212ea98dcd0dc746c2e0f"
	I1122 00:59:26.242895  725942 cri.go:89] found id: ""
	I1122 00:59:26.242966  725942 ssh_runner.go:195] Run: sudo runc list -f json
	I1122 00:59:26.255651  725942 retry.go:31] will retry after 513.251961ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:59:26Z" level=error msg="open /run/runc: no such file or directory"
	I1122 00:59:26.769133  725942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:59:26.781705  725942 pause.go:52] kubelet running: false
	I1122 00:59:26.781769  725942 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1122 00:59:26.969035  725942 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1122 00:59:26.969159  725942 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1122 00:59:27.055945  725942 cri.go:89] found id: "c6dc4dd7fc04ddecd3b2bb2080ec2863a7ec44e94fdeabd8e2cccba7fd814d22"
	I1122 00:59:27.055965  725942 cri.go:89] found id: "7e89eb5cadf037e487a32d9ea4517b84e0355838991972f6b49fefb3847298aa"
	I1122 00:59:27.055970  725942 cri.go:89] found id: "8765e88de49e2be9fd655ccd870bf0fbf040cf37c257af74ba0018ab6313b34a"
	I1122 00:59:27.055974  725942 cri.go:89] found id: "5b3cac023bb69eb303946916eb3bee91968b9f879ebd1c3aacc0ade3047e950b"
	I1122 00:59:27.055977  725942 cri.go:89] found id: "2cf11a2399791e45cf7ed67b2198c31cbb95b3ccb3913ab1861bd2d43031f670"
	I1122 00:59:27.055981  725942 cri.go:89] found id: "a49ade414a411cf4537b0004c3cb9293ea4b12b1790212ea98dcd0dc746c2e0f"
	I1122 00:59:27.055984  725942 cri.go:89] found id: ""
	I1122 00:59:27.056033  725942 ssh_runner.go:195] Run: sudo runc list -f json
	I1122 00:59:27.072630  725942 out.go:203] 
	W1122 00:59:27.076019  725942 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:59:27Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:59:27Z" level=error msg="open /run/runc: no such file or directory"
	
	W1122 00:59:27.076038  725942 out.go:285] * 
	* 
	W1122 00:59:27.083286  725942 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1122 00:59:27.086577  725942 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-683181 --alsologtostderr -v=1 failed: exit status 80
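Note: the pause path disables the kubelet and then asks runc for the running-container list; every attempt at `sudo runc list -f json` fails with `open /run/runc: no such file or directory` even though crictl still reports the kube-system containers, so the command exits with GUEST_PAUSE after the retries. A hedged sketch of manual checks on the node, assuming the profile is still up and reachable via `minikube ssh` (treat `/run/runc` as an assumption: it is runc's default state root, but CRI-O can be configured with a different runtime root):

	# does the runc state directory exist at all?
	minikube ssh -p newest-cni-683181 -- sudo ls -la /run/runc
	# what does the CRI runtime itself report for kube-system?
	minikube ssh -p newest-cni-683181 -- sudo crictl ps --label io.kubernetes.pod.namespace=kube-system
	# query runc directly against its default state root
	minikube ssh -p newest-cni-683181 -- sudo runc --root /run/runc list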
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-683181
helpers_test.go:243: (dbg) docker inspect newest-cni-683181:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "135c0ae6b0320d87bc11e3af1c9defb75409d4de75a05dd6fb885ab556eb0fcb",
	        "Created": "2025-11-22T00:58:25.478632838Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 723649,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-22T00:59:02.915247398Z",
	            "FinishedAt": "2025-11-22T00:59:01.859414867Z"
	        },
	        "Image": "sha256:97061622515a85470dad13cb046ad5fc5021fcd2b05aa921a620abb52951cd5d",
	        "ResolvConfPath": "/var/lib/docker/containers/135c0ae6b0320d87bc11e3af1c9defb75409d4de75a05dd6fb885ab556eb0fcb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/135c0ae6b0320d87bc11e3af1c9defb75409d4de75a05dd6fb885ab556eb0fcb/hostname",
	        "HostsPath": "/var/lib/docker/containers/135c0ae6b0320d87bc11e3af1c9defb75409d4de75a05dd6fb885ab556eb0fcb/hosts",
	        "LogPath": "/var/lib/docker/containers/135c0ae6b0320d87bc11e3af1c9defb75409d4de75a05dd6fb885ab556eb0fcb/135c0ae6b0320d87bc11e3af1c9defb75409d4de75a05dd6fb885ab556eb0fcb-json.log",
	        "Name": "/newest-cni-683181",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-683181:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-683181",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "135c0ae6b0320d87bc11e3af1c9defb75409d4de75a05dd6fb885ab556eb0fcb",
	                "LowerDir": "/var/lib/docker/overlay2/fe785fd9359d610347ceff171bafa42142111d5ef9b3343e32ddfef45bc62e2d-init/diff:/var/lib/docker/overlay2/7e8788c6de692bc1c3758a2bb2c4b8da0fbba26855f855c0f3b655bfbdd92f8e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fe785fd9359d610347ceff171bafa42142111d5ef9b3343e32ddfef45bc62e2d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fe785fd9359d610347ceff171bafa42142111d5ef9b3343e32ddfef45bc62e2d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fe785fd9359d610347ceff171bafa42142111d5ef9b3343e32ddfef45bc62e2d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-683181",
	                "Source": "/var/lib/docker/volumes/newest-cni-683181/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-683181",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-683181",
	                "name.minikube.sigs.k8s.io": "newest-cni-683181",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ec79e1719321d681fac3b3ed01fa96bd8ef54dfd72f12f0cd6dc8901a7b9d91b",
	            "SandboxKey": "/var/run/docker/netns/ec79e1719321",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33817"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33818"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33821"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33819"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33820"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-683181": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e2:ba:e3:ed:12:61",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "621737e0fccb923382516e66d395dca2d3f734654251424cf1f7cd380f8144e7",
	                    "EndpointID": "1fee398ed76d45ff55b3a57ec14d889b3eae056ad25c39ada92aebb9ec09b60c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-683181",
	                        "135c0ae6b032"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
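Note: only a handful of fields in the inspect dump above matter for this post-mortem (State and the published ports). When reproducing by hand, a format string keeps the output readable; a small illustrative sketch (the container name is taken from the report above):

	# state summary: status, paused flag, and last start time
	docker inspect -f '{{.State.Status}} paused={{.State.Paused}} started={{.State.StartedAt}}' newest-cni-683181
	# host port bound to the container's SSH port (the 127.0.0.1:33817 used by ssh_runner above)
	docker port newest-cni-683181 22/tcp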
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-683181 -n newest-cni-683181
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-683181 -n newest-cni-683181: exit status 2 (435.341613ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-683181 logs -n 25
E1122 00:59:27.712973  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/old-k8s-version-625837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-683181 logs -n 25: (1.174639331s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p embed-certs-879000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-879000           │ jenkins │ v1.37.0 │ 22 Nov 25 00:56 UTC │                     │
	│ stop    │ -p embed-certs-879000 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-879000           │ jenkins │ v1.37.0 │ 22 Nov 25 00:56 UTC │ 22 Nov 25 00:57 UTC │
	│ addons  │ enable dashboard -p embed-certs-879000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-879000           │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │ 22 Nov 25 00:57 UTC │
	│ start   │ -p embed-certs-879000 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-879000           │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │ 22 Nov 25 00:57 UTC │
	│ image   │ no-preload-165130 image list --format=json                                                                                                                                                                                                    │ no-preload-165130            │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │ 22 Nov 25 00:57 UTC │
	│ pause   │ -p no-preload-165130 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-165130            │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │                     │
	│ delete  │ -p no-preload-165130                                                                                                                                                                                                                          │ no-preload-165130            │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │ 22 Nov 25 00:57 UTC │
	│ delete  │ -p no-preload-165130                                                                                                                                                                                                                          │ no-preload-165130            │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │ 22 Nov 25 00:57 UTC │
	│ delete  │ -p disable-driver-mounts-046489                                                                                                                                                                                                               │ disable-driver-mounts-046489 │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │ 22 Nov 25 00:57 UTC │
	│ start   │ -p default-k8s-diff-port-882305 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-882305 │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │ 22 Nov 25 00:58 UTC │
	│ image   │ embed-certs-879000 image list --format=json                                                                                                                                                                                                   │ embed-certs-879000           │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │ 22 Nov 25 00:58 UTC │
	│ pause   │ -p embed-certs-879000 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-879000           │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │                     │
	│ delete  │ -p embed-certs-879000                                                                                                                                                                                                                         │ embed-certs-879000           │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │ 22 Nov 25 00:58 UTC │
	│ delete  │ -p embed-certs-879000                                                                                                                                                                                                                         │ embed-certs-879000           │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │ 22 Nov 25 00:58 UTC │
	│ start   │ -p newest-cni-683181 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-683181            │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │ 22 Nov 25 00:58 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-882305 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-882305 │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-882305 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-882305 │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │ 22 Nov 25 00:58 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-882305 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-882305 │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │ 22 Nov 25 00:58 UTC │
	│ start   │ -p default-k8s-diff-port-882305 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-882305 │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-683181 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-683181            │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │                     │
	│ stop    │ -p newest-cni-683181 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-683181            │ jenkins │ v1.37.0 │ 22 Nov 25 00:59 UTC │ 22 Nov 25 00:59 UTC │
	│ addons  │ enable dashboard -p newest-cni-683181 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-683181            │ jenkins │ v1.37.0 │ 22 Nov 25 00:59 UTC │ 22 Nov 25 00:59 UTC │
	│ start   │ -p newest-cni-683181 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-683181            │ jenkins │ v1.37.0 │ 22 Nov 25 00:59 UTC │ 22 Nov 25 00:59 UTC │
	│ image   │ newest-cni-683181 image list --format=json                                                                                                                                                                                                    │ newest-cni-683181            │ jenkins │ v1.37.0 │ 22 Nov 25 00:59 UTC │ 22 Nov 25 00:59 UTC │
	│ pause   │ -p newest-cni-683181 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-683181            │ jenkins │ v1.37.0 │ 22 Nov 25 00:59 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 00:59:02
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1122 00:59:02.503300  723417 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:59:02.503528  723417 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:59:02.503552  723417 out.go:374] Setting ErrFile to fd 2...
	I1122 00:59:02.503572  723417 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:59:02.503848  723417 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1122 00:59:02.504432  723417 out.go:368] Setting JSON to false
	I1122 00:59:02.505759  723417 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":20459,"bootTime":1763752684,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1122 00:59:02.505870  723417 start.go:143] virtualization:  
	I1122 00:59:02.511613  723417 out.go:179] * [newest-cni-683181] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1122 00:59:02.514744  723417 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:59:02.514827  723417 notify.go:221] Checking for updates...
	I1122 00:59:02.520745  723417 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:59:02.523857  723417 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1122 00:59:02.526853  723417 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-513600/.minikube
	I1122 00:59:02.529711  723417 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1122 00:59:02.532900  723417 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:59:02.536696  723417 config.go:182] Loaded profile config "newest-cni-683181": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:59:02.537324  723417 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:59:02.581313  723417 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1122 00:59:02.581437  723417 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:59:02.697293  723417 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-22 00:59:02.68660273 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:59:02.697390  723417 docker.go:319] overlay module found
	I1122 00:59:02.701233  723417 out.go:179] * Using the docker driver based on existing profile
	I1122 00:59:02.704018  723417 start.go:309] selected driver: docker
	I1122 00:59:02.704038  723417 start.go:930] validating driver "docker" against &{Name:newest-cni-683181 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-683181 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:59:02.704143  723417 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:59:02.704780  723417 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:59:02.811205  723417 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-22 00:59:02.800456631 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:59:02.811525  723417 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1122 00:59:02.811551  723417 cni.go:84] Creating CNI manager for ""
	I1122 00:59:02.811603  723417 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:59:02.811640  723417 start.go:353] cluster config:
	{Name:newest-cni-683181 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-683181 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:59:02.815206  723417 out.go:179] * Starting "newest-cni-683181" primary control-plane node in "newest-cni-683181" cluster
	I1122 00:59:02.818167  723417 cache.go:134] Beginning downloading kic base image for docker with crio
	I1122 00:59:02.821166  723417 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:59:02.824043  723417 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:59:02.824093  723417 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1122 00:59:02.824103  723417 cache.go:65] Caching tarball of preloaded images
	I1122 00:59:02.824199  723417 preload.go:238] Found /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1122 00:59:02.824211  723417 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1122 00:59:02.824326  723417 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/newest-cni-683181/config.json ...
	I1122 00:59:02.824539  723417 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:59:02.851660  723417 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:59:02.851680  723417 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:59:02.851701  723417 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:59:02.851732  723417 start.go:360] acquireMachinesLock for newest-cni-683181: {Name:mk27a4458a1236fbb3e5921a2f9459ba81f48a3a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:59:02.851800  723417 start.go:364] duration metric: took 50.305µs to acquireMachinesLock for "newest-cni-683181"
	I1122 00:59:02.851820  723417 start.go:96] Skipping create...Using existing machine configuration
	I1122 00:59:02.851824  723417 fix.go:54] fixHost starting: 
	I1122 00:59:02.852105  723417 cli_runner.go:164] Run: docker container inspect newest-cni-683181 --format={{.State.Status}}
	I1122 00:59:02.875029  723417 fix.go:112] recreateIfNeeded on newest-cni-683181: state=Stopped err=<nil>
	W1122 00:59:02.875051  723417 fix.go:138] unexpected machine state, will restart: <nil>
	I1122 00:59:01.995469  721299 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-882305 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:59:02.014997  721299 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1122 00:59:02.019513  721299 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:59:02.043925  721299 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-882305 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-882305 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1122 00:59:02.044057  721299 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:59:02.044109  721299 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:59:02.122530  721299 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:59:02.122551  721299 crio.go:433] Images already preloaded, skipping extraction
	I1122 00:59:02.122617  721299 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:59:02.176099  721299 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:59:02.176121  721299 cache_images.go:86] Images are preloaded, skipping loading
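
The "Images are preloaded, skipping loading" decision comes from parsing the output of "sudo crictl images --output json" and checking that everything the requested Kubernetes version needs is already in CRI-O's image store. A rough equivalent is sketched below; the JSON field names and the required-image list are assumptions based on current crictl output and v1.34.1 defaults, not taken verbatim from this log:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // Assumed shape of `crictl images --output json`; only the fields we need.
    type imageList struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    func main() {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		panic(err)
    	}
    	var list imageList
    	if err := json.Unmarshal(out, &list); err != nil {
    		panic(err)
    	}

    	have := map[string]bool{}
    	for _, img := range list.Images {
    		for _, tag := range img.RepoTags {
    			have[tag] = true
    		}
    	}

    	// Illustrative subset of what a v1.34.1 / cri-o preload contains.
    	required := []string{
    		"registry.k8s.io/kube-apiserver:v1.34.1",
    		"registry.k8s.io/kube-controller-manager:v1.34.1",
    		"registry.k8s.io/kube-scheduler:v1.34.1",
    		"registry.k8s.io/kube-proxy:v1.34.1",
    		"registry.k8s.io/pause:3.10.1",
    	}
    	var missing []string
    	for _, r := range required {
    		if !have[r] {
    			missing = append(missing, r)
    		}
    	}
    	if len(missing) == 0 {
    		fmt.Println("all images are preloaded")
    	} else {
    		fmt.Println("missing:", strings.Join(missing, ", "))
    	}
    }
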
	I1122 00:59:02.176129  721299 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1122 00:59:02.176224  721299 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-882305 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-882305 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
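
The [Service] block above uses the standard systemd override idiom: the empty ExecStart= first clears whatever kubelet.service defines, and the second ExecStart= supplies the full minikube-specific command line. The unit text is installed as a drop-in (the 10-kubeadm.conf scp a few lines further down). A small sketch of that install step, with the unit body abbreviated:

    package main

    import (
    	"os"
    	"os/exec"
    )

    func main() {
    	const dropInDir = "/etc/systemd/system/kubelet.service.d"
    	const dropIn = dropInDir + "/10-kubeadm.conf"

    	// Abbreviated unit: the empty ExecStart= clears the value inherited
    	// from kubelet.service before the override below takes effect.
    	unit := `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
    `
    	if err := os.MkdirAll(dropInDir, 0755); err != nil {
    		panic(err)
    	}
    	if err := os.WriteFile(dropIn, []byte(unit), 0644); err != nil {
    		panic(err)
    	}
    	// Pick up the new drop-in, as the log does before starting kubelet.
    	if err := exec.Command("systemctl", "daemon-reload").Run(); err != nil {
    		panic(err)
    	}
    }
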
	I1122 00:59:02.176318  721299 ssh_runner.go:195] Run: crio config
	I1122 00:59:02.250179  721299 cni.go:84] Creating CNI manager for ""
	I1122 00:59:02.250200  721299 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:59:02.250223  721299 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1122 00:59:02.250246  721299 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-882305 NodeName:default-k8s-diff-port-882305 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 00:59:02.250398  721299 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-882305"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1122 00:59:02.250472  721299 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 00:59:02.258934  721299 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:59:02.259003  721299 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1122 00:59:02.267225  721299 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1122 00:59:02.292784  721299 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:59:02.308235  721299 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
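
The kubeadm.yaml.new just copied is a single file holding four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---, and it only replaces the live config if the diff further down shows a change. A minimal sketch that splits such a file and reports each document's kind, assuming gopkg.in/yaml.v3 is available:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		panic(err)
    	}
    	// kubeadm config files concatenate several documents with "---".
    	for _, doc := range strings.Split(string(data), "\n---\n") {
    		var head struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		if err := yaml.Unmarshal([]byte(doc), &head); err != nil {
    			panic(err)
    		}
    		fmt.Printf("%-25s %s\n", head.Kind, head.APIVersion)
    	}
    }
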
	I1122 00:59:02.326134  721299 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1122 00:59:02.330342  721299 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:59:02.340868  721299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:59:02.487730  721299 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:59:02.504813  721299 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305 for IP: 192.168.85.2
	I1122 00:59:02.504834  721299 certs.go:195] generating shared ca certs ...
	I1122 00:59:02.504856  721299 certs.go:227] acquiring lock for ca certs: {Name:mkaf4c79493334cb2058b349c36be7473837f9b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:59:02.504986  721299 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key
	I1122 00:59:02.505033  721299 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key
	I1122 00:59:02.505044  721299 certs.go:257] generating profile certs ...
	I1122 00:59:02.505146  721299 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/client.key
	I1122 00:59:02.505214  721299 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/apiserver.key.14c699f7
	I1122 00:59:02.505253  721299 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/proxy-client.key
	I1122 00:59:02.505371  721299 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem (1338 bytes)
	W1122 00:59:02.505403  721299 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937_empty.pem, impossibly tiny 0 bytes
	I1122 00:59:02.505416  721299 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:59:02.505442  721299 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:59:02.505473  721299 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:59:02.505499  721299 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem (1675 bytes)
	I1122 00:59:02.505556  721299 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem (1708 bytes)
	I1122 00:59:02.506504  721299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:59:02.535525  721299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1122 00:59:02.595366  721299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:59:02.642848  721299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:59:02.693548  721299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1122 00:59:02.745426  721299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1122 00:59:02.785837  721299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:59:02.809952  721299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1122 00:59:02.834365  721299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:59:02.855606  721299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem --> /usr/share/ca-certificates/516937.pem (1338 bytes)
	I1122 00:59:02.874418  721299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /usr/share/ca-certificates/5169372.pem (1708 bytes)
	I1122 00:59:02.892936  721299 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 00:59:02.916373  721299 ssh_runner.go:195] Run: openssl version
	I1122 00:59:02.931005  721299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:59:02.945307  721299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:59:02.949616  721299 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:59:02.949683  721299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:59:02.999513  721299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1122 00:59:03.020595  721299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516937.pem && ln -fs /usr/share/ca-certificates/516937.pem /etc/ssl/certs/516937.pem"
	I1122 00:59:03.032434  721299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516937.pem
	I1122 00:59:03.036106  721299 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:55 /usr/share/ca-certificates/516937.pem
	I1122 00:59:03.036169  721299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516937.pem
	I1122 00:59:03.080976  721299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516937.pem /etc/ssl/certs/51391683.0"
	I1122 00:59:03.090266  721299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5169372.pem && ln -fs /usr/share/ca-certificates/5169372.pem /etc/ssl/certs/5169372.pem"
	I1122 00:59:03.100350  721299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5169372.pem
	I1122 00:59:03.104729  721299 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:55 /usr/share/ca-certificates/5169372.pem
	I1122 00:59:03.104805  721299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5169372.pem
	I1122 00:59:03.152781  721299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5169372.pem /etc/ssl/certs/3ec20f2e.0"
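
The three symlinks created above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's hashed-directory convention: consumers of /etc/ssl/certs look a CA up by the subject-name hash that "openssl x509 -hash" prints, so each PEM copied into /usr/share/ca-certificates gets a <hash>.0 link pointing at it. A sketch of that step for a single certificate, shelling out to openssl for the hash as the log does:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	const pem = "/usr/share/ca-certificates/minikubeCA.pem"

    	// Ask openssl for the subject-name hash used to index trust directories.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		panic(err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"

    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	// Replace any stale link, then point <hash>.0 at the PEM (ln -fs).
    	_ = os.Remove(link)
    	if err := os.Symlink(pem, link); err != nil {
    		panic(err)
    	}
    	fmt.Println(link, "->", pem)
    }
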
	I1122 00:59:03.166431  721299 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:59:03.170534  721299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1122 00:59:03.272325  721299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1122 00:59:03.394650  721299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1122 00:59:03.562785  721299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1122 00:59:03.696773  721299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1122 00:59:03.762454  721299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
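
Each -checkend 86400 call above asks whether the certificate expires within the next 24 hours; an expiring cert is what triggers regeneration on restart. The same check in pure Go with crypto/x509 (the path is one of the certs checked above):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	const certPath = "/var/lib/minikube/certs/apiserver-kubelet-client.crt"

    	data, err := os.ReadFile(certPath)
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found in " + certPath)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}

    	// Equivalent of `openssl x509 -checkend 86400`: does the cert
    	// outlive the next 24 hours?
    	deadline := time.Now().Add(24 * time.Hour)
    	if cert.NotAfter.Before(deadline) {
    		fmt.Printf("certificate expires %s: needs regeneration\n", cert.NotAfter)
    	} else {
    		fmt.Printf("certificate valid until %s\n", cert.NotAfter)
    	}
    }
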
	I1122 00:59:03.860436  721299 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-882305 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-882305 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:59:03.860526  721299 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1122 00:59:03.860586  721299 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:59:03.921643  721299 cri.go:89] found id: "ef5cf3bc0e8a1e84b865765165a5244f97715b14ad4afe6bdecb47483cb802ba"
	I1122 00:59:03.921660  721299 cri.go:89] found id: "d1d854f1c70c8c8f58aacea7d3bc3bea0c433b6787c467ffaf9f43d30127f3aa"
	I1122 00:59:03.921671  721299 cri.go:89] found id: "c0ae03824089747781ca3fa95c137501b3b35608e772c7bf534789a146554e3c"
	I1122 00:59:03.921676  721299 cri.go:89] found id: "1ce380445cfc1fe8d2cbb405092ab03fd65cb6c2cf8bac3317898266e679c5d3"
	I1122 00:59:03.921679  721299 cri.go:89] found id: ""
	I1122 00:59:03.921756  721299 ssh_runner.go:195] Run: sudo runc list -f json
	W1122 00:59:03.939051  721299 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:59:03Z" level=error msg="open /run/runc: no such file or directory"
	I1122 00:59:03.939116  721299 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 00:59:03.954585  721299 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1122 00:59:03.954602  721299 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1122 00:59:03.954654  721299 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1122 00:59:03.965013  721299 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1122 00:59:03.965529  721299 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-882305" does not appear in /home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1122 00:59:03.965693  721299 kubeconfig.go:62] /home/jenkins/minikube-integration/21934-513600/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-882305" cluster setting kubeconfig missing "default-k8s-diff-port-882305" context setting]
	I1122 00:59:03.966120  721299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/kubeconfig: {Name:mkcd094d8a765e5a81369c0fb33cb5e696b17bb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
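
"needs updating (will repair)" means the shared kubeconfig has neither a cluster nor a context entry for this profile, so both are added before the file is rewritten under the lock shown above. A rough sketch of such a repair with client-go's clientcmd package (assumes the k8s.io/client-go module is available; the server URL and CA path are illustrative, and the real code also wires in the client certificate for the user entry):

    package main

    import (
    	"k8s.io/client-go/tools/clientcmd"
    	api "k8s.io/client-go/tools/clientcmd/api"
    )

    func main() {
    	const (
    		kubeconfig = "/home/jenkins/minikube-integration/21934-513600/kubeconfig"
    		name       = "default-k8s-diff-port-882305"
    	)

    	cfg, err := clientcmd.LoadFromFile(kubeconfig)
    	if err != nil {
    		panic(err)
    	}
    	if cfg.Clusters == nil {
    		cfg.Clusters = map[string]*api.Cluster{}
    	}
    	if cfg.Contexts == nil {
    		cfg.Contexts = map[string]*api.Context{}
    	}

    	// Add the missing cluster and context entries for this profile.
    	cfg.Clusters[name] = &api.Cluster{
    		Server:               "https://192.168.85.2:8444",
    		CertificateAuthority: "/home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt",
    	}
    	cfg.Contexts[name] = &api.Context{
    		Cluster:  name,
    		AuthInfo: name, // user entry with the profile's client cert is assumed to exist
    	}

    	if err := clientcmd.WriteToFile(*cfg, kubeconfig); err != nil {
    		panic(err)
    	}
    }
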
	I1122 00:59:03.968808  721299 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1122 00:59:03.979602  721299 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1122 00:59:03.979682  721299 kubeadm.go:602] duration metric: took 25.073203ms to restartPrimaryControlPlane
	I1122 00:59:03.979706  721299 kubeadm.go:403] duration metric: took 119.278562ms to StartCluster
	I1122 00:59:03.979751  721299 settings.go:142] acquiring lock: {Name:mk6c31eb57ec65b047b78b4e1046e03fe7cc77bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:59:03.979855  721299 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1122 00:59:03.980650  721299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/kubeconfig: {Name:mkcd094d8a765e5a81369c0fb33cb5e696b17bb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:59:03.980931  721299 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:59:03.981384  721299 config.go:182] Loaded profile config "default-k8s-diff-port-882305": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:59:03.981337  721299 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1122 00:59:03.981469  721299 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-882305"
	I1122 00:59:03.981493  721299 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-882305"
	W1122 00:59:03.981505  721299 addons.go:248] addon storage-provisioner should already be in state true
	I1122 00:59:03.981535  721299 host.go:66] Checking if "default-k8s-diff-port-882305" exists ...
	I1122 00:59:03.981573  721299 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-882305"
	I1122 00:59:03.981717  721299 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-882305"
	I1122 00:59:03.982038  721299 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-882305 --format={{.State.Status}}
	I1122 00:59:03.982308  721299 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-882305 --format={{.State.Status}}
	I1122 00:59:03.981537  721299 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-882305"
	I1122 00:59:03.982806  721299 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-882305"
	W1122 00:59:03.982820  721299 addons.go:248] addon dashboard should already be in state true
	I1122 00:59:03.982847  721299 host.go:66] Checking if "default-k8s-diff-port-882305" exists ...
	I1122 00:59:03.983278  721299 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-882305 --format={{.State.Status}}
	I1122 00:59:03.987111  721299 out.go:179] * Verifying Kubernetes components...
	I1122 00:59:03.990162  721299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:59:04.046335  721299 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-882305"
	W1122 00:59:04.046357  721299 addons.go:248] addon default-storageclass should already be in state true
	I1122 00:59:04.046381  721299 host.go:66] Checking if "default-k8s-diff-port-882305" exists ...
	I1122 00:59:04.046804  721299 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-882305 --format={{.State.Status}}
	I1122 00:59:04.059982  721299 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1122 00:59:04.062953  721299 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1122 00:59:04.065869  721299 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1122 00:59:04.065980  721299 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:59:04.065990  721299 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1122 00:59:04.066053  721299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-882305
	I1122 00:59:04.068799  721299 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1122 00:59:04.068822  721299 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1122 00:59:04.068964  721299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-882305
	I1122 00:59:04.113989  721299 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1122 00:59:04.114011  721299 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1122 00:59:04.114073  721299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-882305
	I1122 00:59:04.138125  721299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/default-k8s-diff-port-882305/id_rsa Username:docker}
	I1122 00:59:04.157153  721299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/default-k8s-diff-port-882305/id_rsa Username:docker}
	I1122 00:59:04.158719  721299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/default-k8s-diff-port-882305/id_rsa Username:docker}
	I1122 00:59:04.312234  721299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:59:04.345077  721299 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:59:04.409899  721299 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1122 00:59:04.409925  721299 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1122 00:59:02.878250  723417 out.go:252] * Restarting existing docker container for "newest-cni-683181" ...
	I1122 00:59:02.878328  723417 cli_runner.go:164] Run: docker start newest-cni-683181
	I1122 00:59:03.202125  723417 cli_runner.go:164] Run: docker container inspect newest-cni-683181 --format={{.State.Status}}
	I1122 00:59:03.231871  723417 kic.go:430] container "newest-cni-683181" state is running.
	I1122 00:59:03.232226  723417 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-683181
	I1122 00:59:03.263960  723417 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/newest-cni-683181/config.json ...
	I1122 00:59:03.264183  723417 machine.go:94] provisionDockerMachine start ...
	I1122 00:59:03.264249  723417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-683181
	I1122 00:59:03.296456  723417 main.go:143] libmachine: Using SSH client type: native
	I1122 00:59:03.296780  723417 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33817 <nil> <nil>}
	I1122 00:59:03.296789  723417 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:59:03.297331  723417 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55510->127.0.0.1:33817: read: connection reset by peer
	I1122 00:59:06.457716  723417 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-683181
	
	I1122 00:59:06.457760  723417 ubuntu.go:182] provisioning hostname "newest-cni-683181"
	I1122 00:59:06.457844  723417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-683181
	I1122 00:59:06.484016  723417 main.go:143] libmachine: Using SSH client type: native
	I1122 00:59:06.484318  723417 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33817 <nil> <nil>}
	I1122 00:59:06.484329  723417 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-683181 && echo "newest-cni-683181" | sudo tee /etc/hostname
	I1122 00:59:06.671423  723417 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-683181
	
	I1122 00:59:06.671543  723417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-683181
	I1122 00:59:06.696749  723417 main.go:143] libmachine: Using SSH client type: native
	I1122 00:59:06.697129  723417 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33817 <nil> <nil>}
	I1122 00:59:06.697152  723417 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-683181' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-683181/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-683181' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:59:06.866237  723417 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1122 00:59:06.866265  723417 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-513600/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-513600/.minikube}
	I1122 00:59:06.866304  723417 ubuntu.go:190] setting up certificates
	I1122 00:59:06.866314  723417 provision.go:84] configureAuth start
	I1122 00:59:06.866389  723417 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-683181
	I1122 00:59:06.896431  723417 provision.go:143] copyHostCerts
	I1122 00:59:06.896506  723417 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem, removing ...
	I1122 00:59:06.896520  723417 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem
	I1122 00:59:06.896600  723417 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem (1078 bytes)
	I1122 00:59:06.896712  723417 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem, removing ...
	I1122 00:59:06.896724  723417 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem
	I1122 00:59:06.896755  723417 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem (1123 bytes)
	I1122 00:59:06.896872  723417 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem, removing ...
	I1122 00:59:06.896883  723417 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem
	I1122 00:59:06.896916  723417 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem (1675 bytes)
	I1122 00:59:06.896979  723417 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem org=jenkins.newest-cni-683181 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-683181]
	I1122 00:59:07.001299  723417 provision.go:177] copyRemoteCerts
	I1122 00:59:07.001464  723417 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:59:07.001536  723417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-683181
	I1122 00:59:07.019769  723417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/newest-cni-683181/id_rsa Username:docker}
	I1122 00:59:07.135805  723417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:59:07.168308  723417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1122 00:59:07.199555  723417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1122 00:59:07.229013  723417 provision.go:87] duration metric: took 362.67383ms to configureAuth
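
configureAuth regenerates the docker-machine style server certificate so its SANs cover every name the node may be reached by: 127.0.0.1, the container IP 192.168.76.2, localhost, minikube and the profile name. A compact sketch of building a certificate with those SANs via crypto/x509; it self-signs for brevity, whereas the real flow signs with ca.pem/ca-key.pem:

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		panic(err)
    	}

    	tmpl := x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-683181"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs matching the san=[...] list in the log line above.
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
    		DNSNames:    []string{"localhost", "minikube", "newest-cni-683181"},
    	}

    	// Self-signed for brevity; minikube signs with its machine CA instead.
    	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}

    	out, err := os.Create("server.pem")
    	if err != nil {
    		panic(err)
    	}
    	defer out.Close()
    	if err := pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
    		panic(err)
    	}
    }
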
	I1122 00:59:07.229082  723417 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:59:07.229327  723417 config.go:182] Loaded profile config "newest-cni-683181": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:59:07.229473  723417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-683181
	I1122 00:59:07.252081  723417 main.go:143] libmachine: Using SSH client type: native
	I1122 00:59:07.252390  723417 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33817 <nil> <nil>}
	I1122 00:59:07.252410  723417 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1122 00:59:04.439090  721299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1122 00:59:04.474959  721299 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1122 00:59:04.474979  721299 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1122 00:59:04.562957  721299 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1122 00:59:04.562978  721299 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1122 00:59:04.615816  721299 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1122 00:59:04.615868  721299 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1122 00:59:04.675067  721299 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1122 00:59:04.675133  721299 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1122 00:59:04.694319  721299 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1122 00:59:04.694388  721299 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1122 00:59:04.715314  721299 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1122 00:59:04.715379  721299 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1122 00:59:04.734131  721299 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1122 00:59:04.734196  721299 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1122 00:59:04.750497  721299 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1122 00:59:04.750569  721299 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1122 00:59:04.770972  721299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1122 00:59:07.722329  723417 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1122 00:59:07.722375  723417 machine.go:97] duration metric: took 4.458175162s to provisionDockerMachine
	I1122 00:59:07.722386  723417 start.go:293] postStartSetup for "newest-cni-683181" (driver="docker")
	I1122 00:59:07.722397  723417 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:59:07.722483  723417 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:59:07.722533  723417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-683181
	I1122 00:59:07.753762  723417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/newest-cni-683181/id_rsa Username:docker}
	I1122 00:59:07.874962  723417 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:59:07.879196  723417 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:59:07.879228  723417 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:59:07.879240  723417 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/addons for local assets ...
	I1122 00:59:07.879299  723417 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/files for local assets ...
	I1122 00:59:07.879384  723417 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem -> 5169372.pem in /etc/ssl/certs
	I1122 00:59:07.879495  723417 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:59:07.890236  723417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /etc/ssl/certs/5169372.pem (1708 bytes)
	I1122 00:59:07.919125  723417 start.go:296] duration metric: took 196.723ms for postStartSetup
	I1122 00:59:07.919247  723417 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:59:07.919298  723417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-683181
	I1122 00:59:07.937135  723417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/newest-cni-683181/id_rsa Username:docker}
	I1122 00:59:08.040307  723417 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:59:08.046798  723417 fix.go:56] duration metric: took 5.194965571s for fixHost
	I1122 00:59:08.046893  723417 start.go:83] releasing machines lock for "newest-cni-683181", held for 5.195062472s
	I1122 00:59:08.046993  723417 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-683181
	I1122 00:59:08.074215  723417 ssh_runner.go:195] Run: cat /version.json
	I1122 00:59:08.074278  723417 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:59:08.074346  723417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-683181
	I1122 00:59:08.074280  723417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-683181
	I1122 00:59:08.113980  723417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/newest-cni-683181/id_rsa Username:docker}
	I1122 00:59:08.116442  723417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/newest-cni-683181/id_rsa Username:docker}
	I1122 00:59:08.351101  723417 ssh_runner.go:195] Run: systemctl --version
	I1122 00:59:08.358178  723417 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1122 00:59:08.414487  723417 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:59:08.419622  723417 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:59:08.419701  723417 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:59:08.434364  723417 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
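
With kindnet as the chosen CNI, any pre-existing bridge or podman configs under /etc/cni/net.d could be picked up ahead of it, so the find/mv above renames them with a .mk_disabled suffix; here nothing matched, hence "nothing to disable". The same sweep in plain Go:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	const netDir = "/etc/cni/net.d"

    	// Same shapes the `find` command matches: *bridge* or *podman*,
    	// skipping anything already renamed to *.mk_disabled.
    	var matches []string
    	for _, pattern := range []string{"*bridge*", "*podman*"} {
    		found, err := filepath.Glob(filepath.Join(netDir, pattern))
    		if err != nil {
    			panic(err)
    		}
    		matches = append(matches, found...)
    	}

    	disabled := 0
    	for _, path := range matches {
    		if strings.HasSuffix(path, ".mk_disabled") {
    			continue
    		}
    		if err := os.Rename(path, path+".mk_disabled"); err != nil {
    			panic(err)
    		}
    		fmt.Println("disabled", path)
    		disabled++
    	}
    	if disabled == 0 {
    		fmt.Println("no active bridge cni configs found - nothing to disable")
    	}
    }
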
	I1122 00:59:08.434411  723417 start.go:496] detecting cgroup driver to use...
	I1122 00:59:08.434444  723417 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1122 00:59:08.434511  723417 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1122 00:59:08.454654  723417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1122 00:59:08.469117  723417 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:59:08.469180  723417 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:59:08.489744  723417 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:59:08.518366  723417 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:59:08.709392  723417 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:59:08.905180  723417 docker.go:234] disabling docker service ...
	I1122 00:59:08.905257  723417 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:59:08.927428  723417 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:59:08.947641  723417 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:59:09.155408  723417 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:59:09.338450  723417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:59:09.355624  723417 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:59:09.375009  723417 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1122 00:59:09.375085  723417 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:59:09.392206  723417 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1122 00:59:09.392312  723417 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:59:09.408908  723417 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:59:09.424892  723417 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:59:09.443007  723417 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:59:09.456492  723417 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:59:09.472740  723417 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:59:09.484559  723417 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:59:09.498124  723417 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:59:09.509830  723417 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:59:09.523713  723417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:59:09.726324  723417 ssh_runner.go:195] Run: sudo systemctl restart crio
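
The run of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses the expected pause image, the cgroupfs cgroup manager with conmon in the pod cgroup, and unprivileged low ports via default_sysctls, and then restarts crio. A sketch of one of those in-place edits (the pause_image line) using Go's regexp package:

    package main

    import (
    	"os"
    	"regexp"
    )

    func main() {
    	const conf = "/etc/crio/crio.conf.d/02-crio.conf"

    	data, err := os.ReadFile(conf)
    	if err != nil {
    		panic(err)
    	}

    	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "..."|' <conf>
    	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
    	updated := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))

    	if err := os.WriteFile(conf, updated, 0644); err != nil {
    		panic(err)
    	}
    	// A `systemctl restart crio`, as in the log, is still needed to apply it.
    }
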
	I1122 00:59:09.988596  723417 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1122 00:59:09.988668  723417 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1122 00:59:09.994953  723417 start.go:564] Will wait 60s for crictl version
	I1122 00:59:09.995017  723417 ssh_runner.go:195] Run: which crictl
	I1122 00:59:09.999299  723417 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:59:10.050245  723417 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1122 00:59:10.050332  723417 ssh_runner.go:195] Run: crio --version
	I1122 00:59:10.103645  723417 ssh_runner.go:195] Run: crio --version
	I1122 00:59:10.162439  723417 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1122 00:59:10.165362  723417 cli_runner.go:164] Run: docker network inspect newest-cni-683181 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:59:10.188504  723417 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1122 00:59:10.192964  723417 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:59:10.209216  723417 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1122 00:59:10.778865  721299 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.466557053s)
	I1122 00:59:10.778940  721299 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.433830001s)
	I1122 00:59:10.778974  721299 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-882305" to be "Ready" ...
	I1122 00:59:10.779306  721299 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.340188909s)
	I1122 00:59:10.779562  721299 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.008491961s)
	I1122 00:59:10.782770  721299 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-882305 addons enable metrics-server
	
	I1122 00:59:10.881477  721299 node_ready.go:49] node "default-k8s-diff-port-882305" is "Ready"
	I1122 00:59:10.881508  721299 node_ready.go:38] duration metric: took 102.515664ms for node "default-k8s-diff-port-882305" to be "Ready" ...
	I1122 00:59:10.881522  721299 api_server.go:52] waiting for apiserver process to appear ...
	I1122 00:59:10.881578  721299 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:59:10.926162  721299 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1122 00:59:10.212156  723417 kubeadm.go:884] updating cluster {Name:newest-cni-683181 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-683181 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1122 00:59:10.212323  723417 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:59:10.212400  723417 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:59:10.268918  723417 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:59:10.268937  723417 crio.go:433] Images already preloaded, skipping extraction
	I1122 00:59:10.268989  723417 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:59:10.346312  723417 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:59:10.346336  723417 cache_images.go:86] Images are preloaded, skipping loading
	I1122 00:59:10.346343  723417 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1122 00:59:10.346437  723417 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-683181 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-683181 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1122 00:59:10.346517  723417 ssh_runner.go:195] Run: crio config
	I1122 00:59:10.438766  723417 cni.go:84] Creating CNI manager for ""
	I1122 00:59:10.438833  723417 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:59:10.438857  723417 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1122 00:59:10.438887  723417 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-683181 NodeName:newest-cni-683181 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 00:59:10.439101  723417 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-683181"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
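	The YAML block above is the kubeadm configuration minikube renders in memory; the steps that follow copy it to /var/tmp/minikube/kubeadm.yaml.new on the node and diff it against the existing file. As a minimal sketch only, and not a command this test run executed, a config of this shape can be sanity-checked on the node with kubeadm's dry-run mode:
	
	  # illustrative; assumes the rendered config is already at this path on the node
	  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
	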
	I1122 00:59:10.439230  723417 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 00:59:10.450839  723417 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:59:10.450974  723417 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1122 00:59:10.458114  723417 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1122 00:59:10.475213  723417 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:59:10.488054  723417 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1122 00:59:10.516017  723417 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1122 00:59:10.519936  723417 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:59:10.537302  723417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:59:10.766134  723417 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:59:10.800565  723417 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/newest-cni-683181 for IP: 192.168.76.2
	I1122 00:59:10.800628  723417 certs.go:195] generating shared ca certs ...
	I1122 00:59:10.800658  723417 certs.go:227] acquiring lock for ca certs: {Name:mkaf4c79493334cb2058b349c36be7473837f9b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:59:10.800823  723417 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key
	I1122 00:59:10.800910  723417 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key
	I1122 00:59:10.800946  723417 certs.go:257] generating profile certs ...
	I1122 00:59:10.801069  723417 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/newest-cni-683181/client.key
	I1122 00:59:10.801156  723417 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/newest-cni-683181/apiserver.key.458b4884
	I1122 00:59:10.801240  723417 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/newest-cni-683181/proxy-client.key
	I1122 00:59:10.801381  723417 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem (1338 bytes)
	W1122 00:59:10.801440  723417 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937_empty.pem, impossibly tiny 0 bytes
	I1122 00:59:10.801464  723417 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:59:10.801527  723417 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:59:10.801576  723417 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:59:10.801629  723417 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem (1675 bytes)
	I1122 00:59:10.801709  723417 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem (1708 bytes)
	I1122 00:59:10.802410  723417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:59:10.835955  723417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1122 00:59:10.863704  723417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:59:10.894131  723417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:59:10.943558  723417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/newest-cni-683181/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1122 00:59:10.968974  723417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/newest-cni-683181/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1122 00:59:11.000134  723417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/newest-cni-683181/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:59:11.066994  723417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/newest-cni-683181/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1122 00:59:11.102718  723417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem --> /usr/share/ca-certificates/516937.pem (1338 bytes)
	I1122 00:59:11.159619  723417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /usr/share/ca-certificates/5169372.pem (1708 bytes)
	I1122 00:59:11.209925  723417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:59:11.247743  723417 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 00:59:11.290352  723417 ssh_runner.go:195] Run: openssl version
	I1122 00:59:11.307175  723417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5169372.pem && ln -fs /usr/share/ca-certificates/5169372.pem /etc/ssl/certs/5169372.pem"
	I1122 00:59:11.331931  723417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5169372.pem
	I1122 00:59:11.336290  723417 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:55 /usr/share/ca-certificates/5169372.pem
	I1122 00:59:11.336393  723417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5169372.pem
	I1122 00:59:11.396981  723417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5169372.pem /etc/ssl/certs/3ec20f2e.0"
	I1122 00:59:11.406734  723417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:59:11.420121  723417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:59:11.427494  723417 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:59:11.427577  723417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:59:11.481193  723417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1122 00:59:11.490649  723417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516937.pem && ln -fs /usr/share/ca-certificates/516937.pem /etc/ssl/certs/516937.pem"
	I1122 00:59:11.505120  723417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516937.pem
	I1122 00:59:11.511278  723417 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:55 /usr/share/ca-certificates/516937.pem
	I1122 00:59:11.511358  723417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516937.pem
	I1122 00:59:11.562438  723417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516937.pem /etc/ssl/certs/51391683.0"
	I1122 00:59:11.572701  723417 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:59:11.578098  723417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1122 00:59:11.622808  723417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1122 00:59:11.669794  723417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1122 00:59:11.715356  723417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1122 00:59:11.760034  723417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1122 00:59:11.810989  723417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1122 00:59:11.872227  723417 kubeadm.go:401] StartCluster: {Name:newest-cni-683181 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-683181 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:59:11.872337  723417 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1122 00:59:11.872417  723417 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:59:11.944703  723417 cri.go:89] found id: ""
	I1122 00:59:11.944783  723417 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 00:59:11.957367  723417 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1122 00:59:11.957389  723417 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1122 00:59:11.957472  723417 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1122 00:59:11.967493  723417 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1122 00:59:11.968150  723417 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-683181" does not appear in /home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1122 00:59:11.968438  723417 kubeconfig.go:62] /home/jenkins/minikube-integration/21934-513600/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-683181" cluster setting kubeconfig missing "newest-cni-683181" context setting]
	I1122 00:59:11.968967  723417 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/kubeconfig: {Name:mkcd094d8a765e5a81369c0fb33cb5e696b17bb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:59:11.970846  723417 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1122 00:59:11.984168  723417 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1122 00:59:11.984211  723417 kubeadm.go:602] duration metric: took 26.815683ms to restartPrimaryControlPlane
	I1122 00:59:11.984220  723417 kubeadm.go:403] duration metric: took 112.007514ms to StartCluster
	I1122 00:59:11.984235  723417 settings.go:142] acquiring lock: {Name:mk6c31eb57ec65b047b78b4e1046e03fe7cc77bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:59:11.984302  723417 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1122 00:59:11.985302  723417 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/kubeconfig: {Name:mkcd094d8a765e5a81369c0fb33cb5e696b17bb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:59:11.985517  723417 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:59:11.985894  723417 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1122 00:59:11.985971  723417 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-683181"
	I1122 00:59:11.985984  723417 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-683181"
	W1122 00:59:11.985990  723417 addons.go:248] addon storage-provisioner should already be in state true
	I1122 00:59:11.986011  723417 host.go:66] Checking if "newest-cni-683181" exists ...
	I1122 00:59:11.986731  723417 cli_runner.go:164] Run: docker container inspect newest-cni-683181 --format={{.State.Status}}
	I1122 00:59:11.987017  723417 config.go:182] Loaded profile config "newest-cni-683181": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:59:11.987092  723417 addons.go:70] Setting dashboard=true in profile "newest-cni-683181"
	I1122 00:59:11.987117  723417 addons.go:239] Setting addon dashboard=true in "newest-cni-683181"
	W1122 00:59:11.987146  723417 addons.go:248] addon dashboard should already be in state true
	I1122 00:59:11.987197  723417 host.go:66] Checking if "newest-cni-683181" exists ...
	I1122 00:59:11.987670  723417 cli_runner.go:164] Run: docker container inspect newest-cni-683181 --format={{.State.Status}}
	I1122 00:59:11.991059  723417 addons.go:70] Setting default-storageclass=true in profile "newest-cni-683181"
	I1122 00:59:11.991290  723417 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-683181"
	I1122 00:59:11.992632  723417 cli_runner.go:164] Run: docker container inspect newest-cni-683181 --format={{.State.Status}}
	I1122 00:59:11.991242  723417 out.go:179] * Verifying Kubernetes components...
	I1122 00:59:12.003524  723417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:59:12.041573  723417 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1122 00:59:12.041653  723417 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1122 00:59:12.047949  723417 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:59:12.047981  723417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1122 00:59:12.048052  723417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-683181
	I1122 00:59:12.054033  723417 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1122 00:59:12.056851  723417 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1122 00:59:12.056878  723417 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1122 00:59:12.056946  723417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-683181
	I1122 00:59:12.062051  723417 addons.go:239] Setting addon default-storageclass=true in "newest-cni-683181"
	W1122 00:59:12.062070  723417 addons.go:248] addon default-storageclass should already be in state true
	I1122 00:59:12.062096  723417 host.go:66] Checking if "newest-cni-683181" exists ...
	I1122 00:59:12.062526  723417 cli_runner.go:164] Run: docker container inspect newest-cni-683181 --format={{.State.Status}}
	I1122 00:59:12.092699  723417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/newest-cni-683181/id_rsa Username:docker}
	I1122 00:59:12.122186  723417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/newest-cni-683181/id_rsa Username:docker}
	I1122 00:59:12.136943  723417 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1122 00:59:12.136963  723417 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1122 00:59:12.137030  723417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-683181
	I1122 00:59:12.168813  723417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/newest-cni-683181/id_rsa Username:docker}
	I1122 00:59:12.410860  723417 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:59:10.929483  721299 addons.go:530] duration metric: took 6.94814783s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1122 00:59:10.946665  721299 api_server.go:72] duration metric: took 6.965669591s to wait for apiserver process to appear ...
	I1122 00:59:10.946691  721299 api_server.go:88] waiting for apiserver healthz status ...
	I1122 00:59:10.946710  721299 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1122 00:59:10.989918  721299 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1122 00:59:10.999341  721299 api_server.go:141] control plane version: v1.34.1
	I1122 00:59:10.999377  721299 api_server.go:131] duration metric: took 52.678155ms to wait for apiserver health ...
	I1122 00:59:10.999388  721299 system_pods.go:43] waiting for kube-system pods to appear ...
	I1122 00:59:11.028588  721299 system_pods.go:59] 8 kube-system pods found
	I1122 00:59:11.028630  721299 system_pods.go:61] "coredns-66bc5c9577-448gn" [a2f33c9b-90d6-4197-9606-48fd95ff1ef2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:59:11.028639  721299 system_pods.go:61] "etcd-default-k8s-diff-port-882305" [b7b7077d-891d-48c6-b3dc-2f137b395bc2] Running
	I1122 00:59:11.028648  721299 system_pods.go:61] "kindnet-kcwqj" [52f46f97-517a-4d53-9374-2313d6220643] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1122 00:59:11.028653  721299 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-882305" [64aeddd8-fe12-4e20-86f8-b6b94d180713] Running
	I1122 00:59:11.028660  721299 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-882305" [da7abe5d-c103-4152-a303-9cca02a54d69] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1122 00:59:11.028669  721299 system_pods.go:61] "kube-proxy-59l6x" [7cdb7bc0-14ce-4e33-aca8-95137883f5e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1122 00:59:11.028676  721299 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-882305" [5506ff95-9cc2-4344-b578-eca19040f97a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1122 00:59:11.028684  721299 system_pods.go:61] "storage-provisioner" [fc6390d1-3d5c-4f70-a9bb-7e5d41d44f2a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:59:11.028691  721299 system_pods.go:74] duration metric: took 29.296456ms to wait for pod list to return data ...
	I1122 00:59:11.028704  721299 default_sa.go:34] waiting for default service account to be created ...
	I1122 00:59:11.032103  721299 default_sa.go:45] found service account: "default"
	I1122 00:59:11.032129  721299 default_sa.go:55] duration metric: took 3.41823ms for default service account to be created ...
	I1122 00:59:11.032139  721299 system_pods.go:116] waiting for k8s-apps to be running ...
	I1122 00:59:11.035472  721299 system_pods.go:86] 8 kube-system pods found
	I1122 00:59:11.035507  721299 system_pods.go:89] "coredns-66bc5c9577-448gn" [a2f33c9b-90d6-4197-9606-48fd95ff1ef2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:59:11.035515  721299 system_pods.go:89] "etcd-default-k8s-diff-port-882305" [b7b7077d-891d-48c6-b3dc-2f137b395bc2] Running
	I1122 00:59:11.035524  721299 system_pods.go:89] "kindnet-kcwqj" [52f46f97-517a-4d53-9374-2313d6220643] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1122 00:59:11.035529  721299 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-882305" [64aeddd8-fe12-4e20-86f8-b6b94d180713] Running
	I1122 00:59:11.035537  721299 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-882305" [da7abe5d-c103-4152-a303-9cca02a54d69] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1122 00:59:11.035546  721299 system_pods.go:89] "kube-proxy-59l6x" [7cdb7bc0-14ce-4e33-aca8-95137883f5e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1122 00:59:11.035556  721299 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-882305" [5506ff95-9cc2-4344-b578-eca19040f97a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1122 00:59:11.035562  721299 system_pods.go:89] "storage-provisioner" [fc6390d1-3d5c-4f70-a9bb-7e5d41d44f2a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:59:11.035574  721299 system_pods.go:126] duration metric: took 3.429388ms to wait for k8s-apps to be running ...
	I1122 00:59:11.035583  721299 system_svc.go:44] waiting for kubelet service to be running ....
	I1122 00:59:11.035639  721299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:59:11.064706  721299 system_svc.go:56] duration metric: took 29.112995ms WaitForService to wait for kubelet
	I1122 00:59:11.064735  721299 kubeadm.go:587] duration metric: took 7.083744586s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:59:11.064753  721299 node_conditions.go:102] verifying NodePressure condition ...
	I1122 00:59:11.086906  721299 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1122 00:59:11.086946  721299 node_conditions.go:123] node cpu capacity is 2
	I1122 00:59:11.086961  721299 node_conditions.go:105] duration metric: took 22.202239ms to run NodePressure ...
	I1122 00:59:11.086974  721299 start.go:242] waiting for startup goroutines ...
	I1122 00:59:11.086982  721299 start.go:247] waiting for cluster config update ...
	I1122 00:59:11.086997  721299 start.go:256] writing updated cluster config ...
	I1122 00:59:11.087286  721299 ssh_runner.go:195] Run: rm -f paused
	I1122 00:59:11.098144  721299 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:59:11.157722  721299 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-448gn" in "kube-system" namespace to be "Ready" or be gone ...
	W1122 00:59:13.185114  721299 pod_ready.go:104] pod "coredns-66bc5c9577-448gn" is not "Ready", error: <nil>
	I1122 00:59:12.588306  723417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:59:12.590176  723417 api_server.go:52] waiting for apiserver process to appear ...
	I1122 00:59:12.590264  723417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:59:12.597203  723417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1122 00:59:12.733409  723417 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1122 00:59:12.733434  723417 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1122 00:59:12.837659  723417 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1122 00:59:12.837680  723417 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1122 00:59:13.016255  723417 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1122 00:59:13.016275  723417 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1122 00:59:13.083796  723417 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1122 00:59:13.083815  723417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1122 00:59:13.119174  723417 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1122 00:59:13.119202  723417 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1122 00:59:13.159740  723417 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1122 00:59:13.159802  723417 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1122 00:59:13.196569  723417 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1122 00:59:13.196637  723417 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1122 00:59:13.219090  723417 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1122 00:59:13.219119  723417 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1122 00:59:13.236662  723417 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1122 00:59:13.236687  723417 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1122 00:59:13.250317  723417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1122 00:59:15.662928  721299 pod_ready.go:104] pod "coredns-66bc5c9577-448gn" is not "Ready", error: <nil>
	W1122 00:59:17.664863  721299 pod_ready.go:104] pod "coredns-66bc5c9577-448gn" is not "Ready", error: <nil>
	I1122 00:59:23.422953  723417 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.834612876s)
	I1122 00:59:23.422998  723417 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (10.832719351s)
	I1122 00:59:23.423011  723417 api_server.go:72] duration metric: took 11.437466683s to wait for apiserver process to appear ...
	I1122 00:59:23.423016  723417 api_server.go:88] waiting for apiserver healthz status ...
	I1122 00:59:23.423032  723417 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:59:23.423326  723417 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.826098401s)
	I1122 00:59:23.423580  723417 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (10.173231632s)
	I1122 00:59:23.427418  723417 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-683181 addons enable metrics-server
	
	I1122 00:59:23.462704  723417 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1122 00:59:23.463728  723417 api_server.go:141] control plane version: v1.34.1
	I1122 00:59:23.463786  723417 api_server.go:131] duration metric: took 40.763607ms to wait for apiserver health ...
	I1122 00:59:23.463810  723417 system_pods.go:43] waiting for kube-system pods to appear ...
	I1122 00:59:23.469060  723417 system_pods.go:59] 8 kube-system pods found
	I1122 00:59:23.469142  723417 system_pods.go:61] "coredns-66bc5c9577-t729j" [aeaa479f-a434-45f0-a153-9930c355bc90] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1122 00:59:23.469167  723417 system_pods.go:61] "etcd-newest-cni-683181" [a7afb010-b8c8-4f7c-b259-9bda74317a71] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1122 00:59:23.469205  723417 system_pods.go:61] "kindnet-bpmkp" [a8d571f3-91a7-4136-8402-f32f10864617] Running
	I1122 00:59:23.469231  723417 system_pods.go:61] "kube-apiserver-newest-cni-683181" [0ae77e9e-2bcc-4530-a9af-edb6a2775a1c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1122 00:59:23.469253  723417 system_pods.go:61] "kube-controller-manager-newest-cni-683181" [b8386b4e-6a08-4989-b637-baf2a4d446bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1122 00:59:23.469286  723417 system_pods.go:61] "kube-proxy-s5mhf" [386ab39d-8d29-482b-b752-52257e97dde8] Running
	I1122 00:59:23.469312  723417 system_pods.go:61] "kube-scheduler-newest-cni-683181" [8b62cbb0-d4b5-487b-bc74-7459fb8fc92f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1122 00:59:23.469332  723417 system_pods.go:61] "storage-provisioner" [1b4ee39a-586b-4b95-b610-8cd6ad0ca178] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1122 00:59:23.469365  723417 system_pods.go:74] duration metric: took 5.537028ms to wait for pod list to return data ...
	I1122 00:59:23.469391  723417 default_sa.go:34] waiting for default service account to be created ...
	I1122 00:59:23.473080  723417 default_sa.go:45] found service account: "default"
	I1122 00:59:23.473101  723417 default_sa.go:55] duration metric: took 3.691839ms for default service account to be created ...
	I1122 00:59:23.473113  723417 kubeadm.go:587] duration metric: took 11.487566994s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1122 00:59:23.473129  723417 node_conditions.go:102] verifying NodePressure condition ...
	I1122 00:59:23.475652  723417 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1122 00:59:23.478173  723417 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1122 00:59:23.478201  723417 node_conditions.go:123] node cpu capacity is 2
	I1122 00:59:23.478213  723417 node_conditions.go:105] duration metric: took 5.079482ms to run NodePressure ...
	I1122 00:59:23.478225  723417 start.go:242] waiting for startup goroutines ...
	I1122 00:59:23.479120  723417 addons.go:530] duration metric: took 11.493222525s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1122 00:59:23.479205  723417 start.go:247] waiting for cluster config update ...
	I1122 00:59:23.479232  723417 start.go:256] writing updated cluster config ...
	I1122 00:59:23.479551  723417 ssh_runner.go:195] Run: rm -f paused
	I1122 00:59:23.580532  723417 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1122 00:59:23.584465  723417 out.go:179] * Done! kubectl is now configured to use "newest-cni-683181" cluster and "default" namespace by default
	W1122 00:59:19.677419  721299 pod_ready.go:104] pod "coredns-66bc5c9577-448gn" is not "Ready", error: <nil>
	W1122 00:59:22.180607  721299 pod_ready.go:104] pod "coredns-66bc5c9577-448gn" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Nov 22 00:59:21 newest-cni-683181 crio[616]: time="2025-11-22T00:59:21.423296932Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:59:21 newest-cni-683181 crio[616]: time="2025-11-22T00:59:21.429429035Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-s5mhf/POD" id=9d67c0c5-cf9a-43e9-a2da-ded26e41d558 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 22 00:59:21 newest-cni-683181 crio[616]: time="2025-11-22T00:59:21.429501698Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:59:21 newest-cni-683181 crio[616]: time="2025-11-22T00:59:21.4688268Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=e7e8df5d-c504-4d8f-a01c-d031c625cdff name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 22 00:59:21 newest-cni-683181 crio[616]: time="2025-11-22T00:59:21.473975146Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=9d67c0c5-cf9a-43e9-a2da-ded26e41d558 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 22 00:59:21 newest-cni-683181 crio[616]: time="2025-11-22T00:59:21.498764648Z" level=info msg="Ran pod sandbox 3801f73c18a0299371050750be096e11fc892006f4e7a0df2ad650af342af016 with infra container: kube-system/kindnet-bpmkp/POD" id=e7e8df5d-c504-4d8f-a01c-d031c625cdff name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 22 00:59:21 newest-cni-683181 crio[616]: time="2025-11-22T00:59:21.504954309Z" level=info msg="Ran pod sandbox be61ae63e65ba736f4321fdb424b77590117599e9a79bd1c7a2eddf0d953b694 with infra container: kube-system/kube-proxy-s5mhf/POD" id=9d67c0c5-cf9a-43e9-a2da-ded26e41d558 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 22 00:59:21 newest-cni-683181 crio[616]: time="2025-11-22T00:59:21.54094693Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=0679e05a-df64-4c00-8b07-4b59a3ed8274 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:59:21 newest-cni-683181 crio[616]: time="2025-11-22T00:59:21.565513137Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=70460a3e-740e-4fa5-9a48-37640571794a name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:59:21 newest-cni-683181 crio[616]: time="2025-11-22T00:59:21.56591516Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=ec4147a9-fa98-45ff-a56e-f29ecc1c3ac4 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:59:21 newest-cni-683181 crio[616]: time="2025-11-22T00:59:21.572246544Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=162e1a1e-fe57-4d8d-97dd-33985bb3095b name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:59:21 newest-cni-683181 crio[616]: time="2025-11-22T00:59:21.575440354Z" level=info msg="Creating container: kube-system/kindnet-bpmkp/kindnet-cni" id=9eca5064-c285-4568-b226-1fcd7338e640 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:59:21 newest-cni-683181 crio[616]: time="2025-11-22T00:59:21.575546616Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:59:21 newest-cni-683181 crio[616]: time="2025-11-22T00:59:21.586597656Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:59:21 newest-cni-683181 crio[616]: time="2025-11-22T00:59:21.593602842Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:59:21 newest-cni-683181 crio[616]: time="2025-11-22T00:59:21.597651405Z" level=info msg="Creating container: kube-system/kube-proxy-s5mhf/kube-proxy" id=62460a36-de5b-4424-a281-d0d10c1208f4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:59:21 newest-cni-683181 crio[616]: time="2025-11-22T00:59:21.598227544Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:59:21 newest-cni-683181 crio[616]: time="2025-11-22T00:59:21.619448634Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:59:21 newest-cni-683181 crio[616]: time="2025-11-22T00:59:21.619951405Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:59:21 newest-cni-683181 crio[616]: time="2025-11-22T00:59:21.751221173Z" level=info msg="Created container 7e89eb5cadf037e487a32d9ea4517b84e0355838991972f6b49fefb3847298aa: kube-system/kindnet-bpmkp/kindnet-cni" id=9eca5064-c285-4568-b226-1fcd7338e640 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:59:21 newest-cni-683181 crio[616]: time="2025-11-22T00:59:21.751834308Z" level=info msg="Starting container: 7e89eb5cadf037e487a32d9ea4517b84e0355838991972f6b49fefb3847298aa" id=91648659-81c6-42db-9914-45e5972af62f name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:59:21 newest-cni-683181 crio[616]: time="2025-11-22T00:59:21.765466541Z" level=info msg="Started container" PID=1067 containerID=7e89eb5cadf037e487a32d9ea4517b84e0355838991972f6b49fefb3847298aa description=kube-system/kindnet-bpmkp/kindnet-cni id=91648659-81c6-42db-9914-45e5972af62f name=/runtime.v1.RuntimeService/StartContainer sandboxID=3801f73c18a0299371050750be096e11fc892006f4e7a0df2ad650af342af016
	Nov 22 00:59:21 newest-cni-683181 crio[616]: time="2025-11-22T00:59:21.769607409Z" level=info msg="Created container c6dc4dd7fc04ddecd3b2bb2080ec2863a7ec44e94fdeabd8e2cccba7fd814d22: kube-system/kube-proxy-s5mhf/kube-proxy" id=62460a36-de5b-4424-a281-d0d10c1208f4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:59:21 newest-cni-683181 crio[616]: time="2025-11-22T00:59:21.771469467Z" level=info msg="Starting container: c6dc4dd7fc04ddecd3b2bb2080ec2863a7ec44e94fdeabd8e2cccba7fd814d22" id=2b901052-0042-414d-bf46-ab79473d9020 name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:59:21 newest-cni-683181 crio[616]: time="2025-11-22T00:59:21.779602431Z" level=info msg="Started container" PID=1066 containerID=c6dc4dd7fc04ddecd3b2bb2080ec2863a7ec44e94fdeabd8e2cccba7fd814d22 description=kube-system/kube-proxy-s5mhf/kube-proxy id=2b901052-0042-414d-bf46-ab79473d9020 name=/runtime.v1.RuntimeService/StartContainer sandboxID=be61ae63e65ba736f4321fdb424b77590117599e9a79bd1c7a2eddf0d953b694
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	c6dc4dd7fc04d       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   6 seconds ago       Running             kube-proxy                1                   be61ae63e65ba       kube-proxy-s5mhf                            kube-system
	7e89eb5cadf03       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   6 seconds ago       Running             kindnet-cni               1                   3801f73c18a02       kindnet-bpmkp                               kube-system
	8765e88de49e2       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   15 seconds ago      Running             etcd                      1                   87bdb0869ad50       etcd-newest-cni-683181                      kube-system
	5b3cac023bb69       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   15 seconds ago      Running             kube-controller-manager   1                   4651247fd91ce       kube-controller-manager-newest-cni-683181   kube-system
	2cf11a2399791       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   16 seconds ago      Running             kube-apiserver            1                   653131c6328bc       kube-apiserver-newest-cni-683181            kube-system
	a49ade414a411       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   16 seconds ago      Running             kube-scheduler            1                   7e5ea269b0be6       kube-scheduler-newest-cni-683181            kube-system
	
	
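	The table below is the node's CRI view of containers at log-collection time. Assuming the standard CRI tooling inside the docker-driver node (the exact collection command is not shown in this report), equivalent output can be reproduced with:
	
	  # illustrative; runs crictl inside the minikube node for this profile
	  minikube -p newest-cni-683181 ssh -- sudo crictl ps -a
	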
	==> describe nodes <==
	Name:               newest-cni-683181
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-683181
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=newest-cni-683181
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T00_58_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:58:47 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-683181
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:59:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:59:20 +0000   Sat, 22 Nov 2025 00:58:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:59:20 +0000   Sat, 22 Nov 2025 00:58:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:59:20 +0000   Sat, 22 Nov 2025 00:58:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 22 Nov 2025 00:59:20 +0000   Sat, 22 Nov 2025 00:58:43 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-683181
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c92eea4d3c03c0156e01fcf8691e3907
	  System UUID:                6a33f369-61a2-4323-af82-24618416d16b
	  Boot ID:                    72ac7385-472f-47d1-a23e-bd80468d6e09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-683181                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         38s
	  kube-system                 kindnet-bpmkp                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      33s
	  kube-system                 kube-apiserver-newest-cni-683181             250m (12%)    0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-controller-manager-newest-cni-683181    200m (10%)    0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-proxy-s5mhf                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-scheduler-newest-cni-683181             100m (5%)     0 (0%)      0 (0%)           0 (0%)         38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 31s                kube-proxy       
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  46s (x8 over 46s)  kubelet          Node newest-cni-683181 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    46s (x8 over 46s)  kubelet          Node newest-cni-683181 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     46s (x8 over 46s)  kubelet          Node newest-cni-683181 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    38s                kubelet          Node newest-cni-683181 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 38s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  38s                kubelet          Node newest-cni-683181 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     38s                kubelet          Node newest-cni-683181 status is now: NodeHasSufficientPID
	  Normal   Starting                 38s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           34s                node-controller  Node newest-cni-683181 event: Registered Node newest-cni-683181 in Controller
	  Normal   Starting                 17s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 17s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  17s (x8 over 17s)  kubelet          Node newest-cni-683181 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17s (x8 over 17s)  kubelet          Node newest-cni-683181 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17s (x8 over 17s)  kubelet          Node newest-cni-683181 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3s                 node-controller  Node newest-cni-683181 event: Registered Node newest-cni-683181 in Controller
	
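	Note that the node above still reports Ready=False: the kubelet's condition message is "no CNI configuration file in /etc/cni/net.d/", and the CRI-O log earlier shows the kindnet-cni container only restarting at 00:59:21, a few seconds before this snapshot. A quick re-check of the condition once the CNI pod is up might look like the following (illustrative, not part of the recorded run):
	
	  kubectl --context newest-cni-683181 get node newest-cni-683181 \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	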
	
	==> dmesg <==
	[Nov22 00:37] overlayfs: idmapped layers are currently not supported
	[ +56.322609] overlayfs: idmapped layers are currently not supported
	[Nov22 00:38] overlayfs: idmapped layers are currently not supported
	[Nov22 00:39] overlayfs: idmapped layers are currently not supported
	[ +23.174928] overlayfs: idmapped layers are currently not supported
	[Nov22 00:41] overlayfs: idmapped layers are currently not supported
	[Nov22 00:42] overlayfs: idmapped layers are currently not supported
	[Nov22 00:44] overlayfs: idmapped layers are currently not supported
	[Nov22 00:45] overlayfs: idmapped layers are currently not supported
	[Nov22 00:46] overlayfs: idmapped layers are currently not supported
	[Nov22 00:48] overlayfs: idmapped layers are currently not supported
	[Nov22 00:50] overlayfs: idmapped layers are currently not supported
	[Nov22 00:51] overlayfs: idmapped layers are currently not supported
	[ +11.900293] overlayfs: idmapped layers are currently not supported
	[ +28.922055] overlayfs: idmapped layers are currently not supported
	[Nov22 00:52] overlayfs: idmapped layers are currently not supported
	[Nov22 00:53] overlayfs: idmapped layers are currently not supported
	[Nov22 00:54] overlayfs: idmapped layers are currently not supported
	[Nov22 00:55] overlayfs: idmapped layers are currently not supported
	[Nov22 00:56] overlayfs: idmapped layers are currently not supported
	[Nov22 00:57] overlayfs: idmapped layers are currently not supported
	[Nov22 00:58] overlayfs: idmapped layers are currently not supported
	[ +43.407301] overlayfs: idmapped layers are currently not supported
	[Nov22 00:59] overlayfs: idmapped layers are currently not supported
	[  +8.585740] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [8765e88de49e2be9fd655ccd870bf0fbf040cf37c257af74ba0018ab6313b34a] <==
	{"level":"warn","ts":"2025-11-22T00:59:17.984230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:18.062500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:18.114364Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:18.179865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:18.252335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:18.313451Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:18.345755Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:18.417454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:18.450364Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:18.524247Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:18.555295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:18.588532Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:18.629971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:18.697885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:18.709234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:18.728110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:18.781102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:18.804202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:18.856403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:18.940304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:19.018585Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:19.066724Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:19.121690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:19.169455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:19.258732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50906","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 00:59:28 up  5:41,  0 user,  load average: 6.36, 4.37, 3.18
	Linux newest-cni-683181 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7e89eb5cadf037e487a32d9ea4517b84e0355838991972f6b49fefb3847298aa] <==
	I1122 00:59:21.916593       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1122 00:59:21.923781       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1122 00:59:21.924010       1 main.go:148] setting mtu 1500 for CNI 
	I1122 00:59:21.924060       1 main.go:178] kindnetd IP family: "ipv4"
	I1122 00:59:21.924100       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-22T00:59:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1122 00:59:22.195823       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1122 00:59:22.195841       1 controller.go:381] "Waiting for informer caches to sync"
	I1122 00:59:22.195850       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1122 00:59:22.196137       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [2cf11a2399791e45cf7ed67b2198c31cbb95b3ccb3913ab1861bd2d43031f670] <==
	I1122 00:59:20.707046       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1122 00:59:20.712852       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1122 00:59:20.712877       1 policy_source.go:240] refreshing policies
	I1122 00:59:20.718716       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1122 00:59:20.718775       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1122 00:59:20.724248       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1122 00:59:20.752122       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1122 00:59:20.757934       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1122 00:59:20.758952       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1122 00:59:20.802418       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1122 00:59:20.802436       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1122 00:59:20.808178       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:59:21.198956       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1122 00:59:21.518596       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1122 00:59:22.849693       1 controller.go:667] quota admission added evaluator for: namespaces
	I1122 00:59:22.929363       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1122 00:59:22.971052       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1122 00:59:23.010909       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1122 00:59:23.156851       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.71.68"}
	I1122 00:59:23.188384       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.39.113"}
	I1122 00:59:25.365690       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1122 00:59:25.390129       1 controller.go:667] quota admission added evaluator for: endpoints
	I1122 00:59:25.501275       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1122 00:59:25.547673       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1122 00:59:25.547725       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [5b3cac023bb69eb303946916eb3bee91968b9f879ebd1c3aacc0ade3047e950b] <==
	I1122 00:59:25.133494       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1122 00:59:25.133616       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1122 00:59:25.139959       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1122 00:59:25.157543       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1122 00:59:25.160882       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1122 00:59:25.168228       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1122 00:59:25.174064       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1122 00:59:25.176848       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1122 00:59:25.193950       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1122 00:59:25.194583       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1122 00:59:25.231262       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1122 00:59:25.231320       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1122 00:59:25.231366       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1122 00:59:25.231376       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1122 00:59:25.195508       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1122 00:59:25.196177       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1122 00:59:25.239075       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:59:25.239205       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:59:25.239231       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1122 00:59:25.240275       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:59:25.262215       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1122 00:59:25.297949       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1122 00:59:25.299236       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:59:25.299259       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1122 00:59:25.299269       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [c6dc4dd7fc04ddecd3b2bb2080ec2863a7ec44e94fdeabd8e2cccba7fd814d22] <==
	I1122 00:59:22.806855       1 server_linux.go:53] "Using iptables proxy"
	I1122 00:59:23.030723       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1122 00:59:23.133973       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1122 00:59:23.134013       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1122 00:59:23.134083       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1122 00:59:23.322216       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1122 00:59:23.322298       1 server_linux.go:132] "Using iptables Proxier"
	I1122 00:59:23.385281       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1122 00:59:23.385654       1 server.go:527] "Version info" version="v1.34.1"
	I1122 00:59:23.385669       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:59:23.416719       1 config.go:200] "Starting service config controller"
	I1122 00:59:23.416738       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1122 00:59:23.416754       1 config.go:106] "Starting endpoint slice config controller"
	I1122 00:59:23.416759       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1122 00:59:23.416769       1 config.go:403] "Starting serviceCIDR config controller"
	I1122 00:59:23.416773       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1122 00:59:23.424288       1 config.go:309] "Starting node config controller"
	I1122 00:59:23.429422       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1122 00:59:23.429439       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1122 00:59:23.517365       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1122 00:59:23.517367       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1122 00:59:23.517391       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a49ade414a411cf4537b0004c3cb9293ea4b12b1790212ea98dcd0dc746c2e0f] <==
	I1122 00:59:15.587906       1 serving.go:386] Generated self-signed cert in-memory
	I1122 00:59:21.206315       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1122 00:59:21.206352       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:59:21.272049       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1122 00:59:21.272156       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1122 00:59:21.272314       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:59:21.272349       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:59:21.272848       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1122 00:59:21.273063       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1122 00:59:21.272931       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1122 00:59:21.273045       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1122 00:59:21.379006       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:59:21.379085       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1122 00:59:21.402404       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 22 00:59:16 newest-cni-683181 kubelet[736]: E1122 00:59:16.377726     736 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-683181\" not found" node="newest-cni-683181"
	Nov 22 00:59:18 newest-cni-683181 kubelet[736]: E1122 00:59:18.848347     736 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-683181\" not found" node="newest-cni-683181"
	Nov 22 00:59:20 newest-cni-683181 kubelet[736]: I1122 00:59:20.537218     736 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-683181"
	Nov 22 00:59:20 newest-cni-683181 kubelet[736]: E1122 00:59:20.783945     736 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-683181\" already exists" pod="kube-system/etcd-newest-cni-683181"
	Nov 22 00:59:20 newest-cni-683181 kubelet[736]: I1122 00:59:20.783999     736 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-683181"
	Nov 22 00:59:20 newest-cni-683181 kubelet[736]: I1122 00:59:20.788405     736 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-683181"
	Nov 22 00:59:20 newest-cni-683181 kubelet[736]: I1122 00:59:20.788552     736 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-683181"
	Nov 22 00:59:20 newest-cni-683181 kubelet[736]: I1122 00:59:20.788584     736 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 22 00:59:20 newest-cni-683181 kubelet[736]: I1122 00:59:20.790125     736 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 22 00:59:20 newest-cni-683181 kubelet[736]: E1122 00:59:20.794353     736 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-683181\" already exists" pod="kube-system/kube-apiserver-newest-cni-683181"
	Nov 22 00:59:20 newest-cni-683181 kubelet[736]: I1122 00:59:20.794510     736 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-683181"
	Nov 22 00:59:20 newest-cni-683181 kubelet[736]: E1122 00:59:20.840794     736 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-683181\" already exists" pod="kube-system/kube-controller-manager-newest-cni-683181"
	Nov 22 00:59:20 newest-cni-683181 kubelet[736]: I1122 00:59:20.840828     736 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-683181"
	Nov 22 00:59:20 newest-cni-683181 kubelet[736]: E1122 00:59:20.864603     736 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-683181\" already exists" pod="kube-system/kube-scheduler-newest-cni-683181"
	Nov 22 00:59:21 newest-cni-683181 kubelet[736]: I1122 00:59:21.084521     736 apiserver.go:52] "Watching apiserver"
	Nov 22 00:59:21 newest-cni-683181 kubelet[736]: I1122 00:59:21.139469     736 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 22 00:59:21 newest-cni-683181 kubelet[736]: I1122 00:59:21.153936     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a8d571f3-91a7-4136-8402-f32f10864617-lib-modules\") pod \"kindnet-bpmkp\" (UID: \"a8d571f3-91a7-4136-8402-f32f10864617\") " pod="kube-system/kindnet-bpmkp"
	Nov 22 00:59:21 newest-cni-683181 kubelet[736]: I1122 00:59:21.153982     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/386ab39d-8d29-482b-b752-52257e97dde8-lib-modules\") pod \"kube-proxy-s5mhf\" (UID: \"386ab39d-8d29-482b-b752-52257e97dde8\") " pod="kube-system/kube-proxy-s5mhf"
	Nov 22 00:59:21 newest-cni-683181 kubelet[736]: I1122 00:59:21.154038     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a8d571f3-91a7-4136-8402-f32f10864617-xtables-lock\") pod \"kindnet-bpmkp\" (UID: \"a8d571f3-91a7-4136-8402-f32f10864617\") " pod="kube-system/kindnet-bpmkp"
	Nov 22 00:59:21 newest-cni-683181 kubelet[736]: I1122 00:59:21.154059     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/386ab39d-8d29-482b-b752-52257e97dde8-xtables-lock\") pod \"kube-proxy-s5mhf\" (UID: \"386ab39d-8d29-482b-b752-52257e97dde8\") " pod="kube-system/kube-proxy-s5mhf"
	Nov 22 00:59:21 newest-cni-683181 kubelet[736]: I1122 00:59:21.154115     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a8d571f3-91a7-4136-8402-f32f10864617-cni-cfg\") pod \"kindnet-bpmkp\" (UID: \"a8d571f3-91a7-4136-8402-f32f10864617\") " pod="kube-system/kindnet-bpmkp"
	Nov 22 00:59:21 newest-cni-683181 kubelet[736]: I1122 00:59:21.239667     736 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 22 00:59:25 newest-cni-683181 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 22 00:59:25 newest-cni-683181 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 22 00:59:25 newest-cni-683181 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-683181 -n newest-cni-683181
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-683181 -n newest-cni-683181: exit status 2 (372.421435ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-683181 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-t729j storage-provisioner dashboard-metrics-scraper-6ffb444bf9-nkd9c kubernetes-dashboard-855c9754f9-tlcvt
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-683181 describe pod coredns-66bc5c9577-t729j storage-provisioner dashboard-metrics-scraper-6ffb444bf9-nkd9c kubernetes-dashboard-855c9754f9-tlcvt
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-683181 describe pod coredns-66bc5c9577-t729j storage-provisioner dashboard-metrics-scraper-6ffb444bf9-nkd9c kubernetes-dashboard-855c9754f9-tlcvt: exit status 1 (83.337821ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-t729j" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-nkd9c" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-tlcvt" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-683181 describe pod coredns-66bc5c9577-t729j storage-provisioner dashboard-metrics-scraper-6ffb444bf9-nkd9c kubernetes-dashboard-855c9754f9-tlcvt: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-683181
helpers_test.go:243: (dbg) docker inspect newest-cni-683181:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "135c0ae6b0320d87bc11e3af1c9defb75409d4de75a05dd6fb885ab556eb0fcb",
	        "Created": "2025-11-22T00:58:25.478632838Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 723649,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-22T00:59:02.915247398Z",
	            "FinishedAt": "2025-11-22T00:59:01.859414867Z"
	        },
	        "Image": "sha256:97061622515a85470dad13cb046ad5fc5021fcd2b05aa921a620abb52951cd5d",
	        "ResolvConfPath": "/var/lib/docker/containers/135c0ae6b0320d87bc11e3af1c9defb75409d4de75a05dd6fb885ab556eb0fcb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/135c0ae6b0320d87bc11e3af1c9defb75409d4de75a05dd6fb885ab556eb0fcb/hostname",
	        "HostsPath": "/var/lib/docker/containers/135c0ae6b0320d87bc11e3af1c9defb75409d4de75a05dd6fb885ab556eb0fcb/hosts",
	        "LogPath": "/var/lib/docker/containers/135c0ae6b0320d87bc11e3af1c9defb75409d4de75a05dd6fb885ab556eb0fcb/135c0ae6b0320d87bc11e3af1c9defb75409d4de75a05dd6fb885ab556eb0fcb-json.log",
	        "Name": "/newest-cni-683181",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-683181:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-683181",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "135c0ae6b0320d87bc11e3af1c9defb75409d4de75a05dd6fb885ab556eb0fcb",
	                "LowerDir": "/var/lib/docker/overlay2/fe785fd9359d610347ceff171bafa42142111d5ef9b3343e32ddfef45bc62e2d-init/diff:/var/lib/docker/overlay2/7e8788c6de692bc1c3758a2bb2c4b8da0fbba26855f855c0f3b655bfbdd92f8e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fe785fd9359d610347ceff171bafa42142111d5ef9b3343e32ddfef45bc62e2d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fe785fd9359d610347ceff171bafa42142111d5ef9b3343e32ddfef45bc62e2d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fe785fd9359d610347ceff171bafa42142111d5ef9b3343e32ddfef45bc62e2d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-683181",
	                "Source": "/var/lib/docker/volumes/newest-cni-683181/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-683181",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-683181",
	                "name.minikube.sigs.k8s.io": "newest-cni-683181",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ec79e1719321d681fac3b3ed01fa96bd8ef54dfd72f12f0cd6dc8901a7b9d91b",
	            "SandboxKey": "/var/run/docker/netns/ec79e1719321",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33817"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33818"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33821"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33819"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33820"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-683181": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e2:ba:e3:ed:12:61",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "621737e0fccb923382516e66d395dca2d3f734654251424cf1f7cd380f8144e7",
	                    "EndpointID": "1fee398ed76d45ff55b3a57ec14d889b3eae056ad25c39ada92aebb9ec09b60c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-683181",
	                        "135c0ae6b032"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-683181 -n newest-cni-683181
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-683181 -n newest-cni-683181: exit status 2 (363.425874ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-683181 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-683181 logs -n 25: (1.166212263s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable metrics-server -p embed-certs-879000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-879000           │ jenkins │ v1.37.0 │ 22 Nov 25 00:56 UTC │                     │
	│ stop    │ -p embed-certs-879000 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-879000           │ jenkins │ v1.37.0 │ 22 Nov 25 00:56 UTC │ 22 Nov 25 00:57 UTC │
	│ addons  │ enable dashboard -p embed-certs-879000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-879000           │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │ 22 Nov 25 00:57 UTC │
	│ start   │ -p embed-certs-879000 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-879000           │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │ 22 Nov 25 00:57 UTC │
	│ image   │ no-preload-165130 image list --format=json                                                                                                                                                                                                    │ no-preload-165130            │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │ 22 Nov 25 00:57 UTC │
	│ pause   │ -p no-preload-165130 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-165130            │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │                     │
	│ delete  │ -p no-preload-165130                                                                                                                                                                                                                          │ no-preload-165130            │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │ 22 Nov 25 00:57 UTC │
	│ delete  │ -p no-preload-165130                                                                                                                                                                                                                          │ no-preload-165130            │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │ 22 Nov 25 00:57 UTC │
	│ delete  │ -p disable-driver-mounts-046489                                                                                                                                                                                                               │ disable-driver-mounts-046489 │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │ 22 Nov 25 00:57 UTC │
	│ start   │ -p default-k8s-diff-port-882305 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-882305 │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │ 22 Nov 25 00:58 UTC │
	│ image   │ embed-certs-879000 image list --format=json                                                                                                                                                                                                   │ embed-certs-879000           │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │ 22 Nov 25 00:58 UTC │
	│ pause   │ -p embed-certs-879000 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-879000           │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │                     │
	│ delete  │ -p embed-certs-879000                                                                                                                                                                                                                         │ embed-certs-879000           │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │ 22 Nov 25 00:58 UTC │
	│ delete  │ -p embed-certs-879000                                                                                                                                                                                                                         │ embed-certs-879000           │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │ 22 Nov 25 00:58 UTC │
	│ start   │ -p newest-cni-683181 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-683181            │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │ 22 Nov 25 00:58 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-882305 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-882305 │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-882305 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-882305 │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │ 22 Nov 25 00:58 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-882305 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-882305 │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │ 22 Nov 25 00:58 UTC │
	│ start   │ -p default-k8s-diff-port-882305 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-882305 │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-683181 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-683181            │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │                     │
	│ stop    │ -p newest-cni-683181 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-683181            │ jenkins │ v1.37.0 │ 22 Nov 25 00:59 UTC │ 22 Nov 25 00:59 UTC │
	│ addons  │ enable dashboard -p newest-cni-683181 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-683181            │ jenkins │ v1.37.0 │ 22 Nov 25 00:59 UTC │ 22 Nov 25 00:59 UTC │
	│ start   │ -p newest-cni-683181 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-683181            │ jenkins │ v1.37.0 │ 22 Nov 25 00:59 UTC │ 22 Nov 25 00:59 UTC │
	│ image   │ newest-cni-683181 image list --format=json                                                                                                                                                                                                    │ newest-cni-683181            │ jenkins │ v1.37.0 │ 22 Nov 25 00:59 UTC │ 22 Nov 25 00:59 UTC │
	│ pause   │ -p newest-cni-683181 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-683181            │ jenkins │ v1.37.0 │ 22 Nov 25 00:59 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 00:59:02
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1122 00:59:02.503300  723417 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:59:02.503528  723417 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:59:02.503552  723417 out.go:374] Setting ErrFile to fd 2...
	I1122 00:59:02.503572  723417 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:59:02.503848  723417 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1122 00:59:02.504432  723417 out.go:368] Setting JSON to false
	I1122 00:59:02.505759  723417 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":20459,"bootTime":1763752684,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1122 00:59:02.505870  723417 start.go:143] virtualization:  
	I1122 00:59:02.511613  723417 out.go:179] * [newest-cni-683181] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1122 00:59:02.514744  723417 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:59:02.514827  723417 notify.go:221] Checking for updates...
	I1122 00:59:02.520745  723417 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:59:02.523857  723417 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1122 00:59:02.526853  723417 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-513600/.minikube
	I1122 00:59:02.529711  723417 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1122 00:59:02.532900  723417 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:59:02.536696  723417 config.go:182] Loaded profile config "newest-cni-683181": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:59:02.537324  723417 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:59:02.581313  723417 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1122 00:59:02.581437  723417 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:59:02.697293  723417 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-22 00:59:02.68660273 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:59:02.697390  723417 docker.go:319] overlay module found
	I1122 00:59:02.701233  723417 out.go:179] * Using the docker driver based on existing profile
	I1122 00:59:02.704018  723417 start.go:309] selected driver: docker
	I1122 00:59:02.704038  723417 start.go:930] validating driver "docker" against &{Name:newest-cni-683181 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-683181 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:59:02.704143  723417 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:59:02.704780  723417 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:59:02.811205  723417 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-22 00:59:02.800456631 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:59:02.811525  723417 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1122 00:59:02.811551  723417 cni.go:84] Creating CNI manager for ""
	I1122 00:59:02.811603  723417 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:59:02.811640  723417 start.go:353] cluster config:
	{Name:newest-cni-683181 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-683181 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:59:02.815206  723417 out.go:179] * Starting "newest-cni-683181" primary control-plane node in "newest-cni-683181" cluster
	I1122 00:59:02.818167  723417 cache.go:134] Beginning downloading kic base image for docker with crio
	I1122 00:59:02.821166  723417 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:59:02.824043  723417 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:59:02.824093  723417 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1122 00:59:02.824103  723417 cache.go:65] Caching tarball of preloaded images
	I1122 00:59:02.824199  723417 preload.go:238] Found /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1122 00:59:02.824211  723417 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1122 00:59:02.824326  723417 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/newest-cni-683181/config.json ...
	I1122 00:59:02.824539  723417 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:59:02.851660  723417 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:59:02.851680  723417 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:59:02.851701  723417 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:59:02.851732  723417 start.go:360] acquireMachinesLock for newest-cni-683181: {Name:mk27a4458a1236fbb3e5921a2f9459ba81f48a3a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:59:02.851800  723417 start.go:364] duration metric: took 50.305µs to acquireMachinesLock for "newest-cni-683181"
	I1122 00:59:02.851820  723417 start.go:96] Skipping create...Using existing machine configuration
	I1122 00:59:02.851824  723417 fix.go:54] fixHost starting: 
	I1122 00:59:02.852105  723417 cli_runner.go:164] Run: docker container inspect newest-cni-683181 --format={{.State.Status}}
	I1122 00:59:02.875029  723417 fix.go:112] recreateIfNeeded on newest-cni-683181: state=Stopped err=<nil>
	W1122 00:59:02.875051  723417 fix.go:138] unexpected machine state, will restart: <nil>
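
The restart decision above is driven by the container state that "docker container inspect --format={{.State.Status}}" reports: a stopped container is reused and restarted rather than recreated. A minimal sketch of that probe in Go (illustrative only, not minikube's code; the container name is taken from the log):

    // check_state.go - illustrative only; mirrors the
    // "docker container inspect --format={{.State.Status}}" probe seen above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func containerState(name string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect",
    		"--format", "{{.State.Status}}", name).Output()
    	if err != nil {
    		return "", fmt.Errorf("inspect %s: %w", name, err)
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	state, err := containerState("newest-cni-683181") // name taken from the log
    	if err != nil {
    		fmt.Println("inspect failed:", err)
    		return
    	}
    	// Docker reports "exited" for a stopped container; minikube maps that to
    	// its own "Stopped" state and decides to run "docker start".
    	if state == "exited" || state == "created" {
    		fmt.Println("container is", state, "- a restart would be needed")
    	} else {
    		fmt.Println("container is", state)
    	}
    }
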
	I1122 00:59:01.995469  721299 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-882305 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:59:02.014997  721299 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1122 00:59:02.019513  721299 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:59:02.043925  721299 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-882305 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-882305 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1122 00:59:02.044057  721299 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:59:02.044109  721299 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:59:02.122530  721299 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:59:02.122551  721299 crio.go:433] Images already preloaded, skipping extraction
	I1122 00:59:02.122617  721299 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:59:02.176099  721299 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:59:02.176121  721299 cache_images.go:86] Images are preloaded, skipping loading
	I1122 00:59:02.176129  721299 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1122 00:59:02.176224  721299 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-882305 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-882305 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1122 00:59:02.176318  721299 ssh_runner.go:195] Run: crio config
	I1122 00:59:02.250179  721299 cni.go:84] Creating CNI manager for ""
	I1122 00:59:02.250200  721299 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:59:02.250223  721299 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1122 00:59:02.250246  721299 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-882305 NodeName:default-k8s-diff-port-882305 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 00:59:02.250398  721299 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-882305"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1122 00:59:02.250472  721299 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 00:59:02.258934  721299 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:59:02.259003  721299 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1122 00:59:02.267225  721299 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1122 00:59:02.292784  721299 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:59:02.308235  721299 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
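
The kubeadm.yaml.new pushed to the node above (2225 bytes) is the multi-document config printed at kubeadm.go:196: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration concatenated into one file. A rough sketch of producing such a document from a parameter struct with text/template; the struct fields and template below are simplified placeholders, not minikube's actual template:

    // render_kubeadm.go - simplified illustration of generating a kubeadm
    // InitConfiguration document from a parameter struct.
    package main

    import (
    	"os"
    	"text/template"
    )

    type initCfg struct {
    	AdvertiseAddress string
    	BindPort         int
    	NodeName         string
    	CRISocket        string
    }

    const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
    `

    func main() {
    	cfg := initCfg{ // values copied from the log for illustration
    		AdvertiseAddress: "192.168.85.2",
    		BindPort:         8444,
    		NodeName:         "default-k8s-diff-port-882305",
    		CRISocket:        "unix:///var/run/crio/crio.sock",
    	}
    	t := template.Must(template.New("init").Parse(initTmpl))
    	if err := t.Execute(os.Stdout, cfg); err != nil {
    		panic(err)
    	}
    }
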
	I1122 00:59:02.326134  721299 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1122 00:59:02.330342  721299 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:59:02.340868  721299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:59:02.487730  721299 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:59:02.504813  721299 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305 for IP: 192.168.85.2
	I1122 00:59:02.504834  721299 certs.go:195] generating shared ca certs ...
	I1122 00:59:02.504856  721299 certs.go:227] acquiring lock for ca certs: {Name:mkaf4c79493334cb2058b349c36be7473837f9b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:59:02.504986  721299 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key
	I1122 00:59:02.505033  721299 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key
	I1122 00:59:02.505044  721299 certs.go:257] generating profile certs ...
	I1122 00:59:02.505146  721299 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/client.key
	I1122 00:59:02.505214  721299 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/apiserver.key.14c699f7
	I1122 00:59:02.505253  721299 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/proxy-client.key
	I1122 00:59:02.505371  721299 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem (1338 bytes)
	W1122 00:59:02.505403  721299 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937_empty.pem, impossibly tiny 0 bytes
	I1122 00:59:02.505416  721299 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:59:02.505442  721299 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:59:02.505473  721299 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:59:02.505499  721299 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem (1675 bytes)
	I1122 00:59:02.505556  721299 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem (1708 bytes)
	I1122 00:59:02.506504  721299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:59:02.535525  721299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1122 00:59:02.595366  721299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:59:02.642848  721299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:59:02.693548  721299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1122 00:59:02.745426  721299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1122 00:59:02.785837  721299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:59:02.809952  721299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1122 00:59:02.834365  721299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:59:02.855606  721299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem --> /usr/share/ca-certificates/516937.pem (1338 bytes)
	I1122 00:59:02.874418  721299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /usr/share/ca-certificates/5169372.pem (1708 bytes)
	I1122 00:59:02.892936  721299 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 00:59:02.916373  721299 ssh_runner.go:195] Run: openssl version
	I1122 00:59:02.931005  721299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:59:02.945307  721299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:59:02.949616  721299 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:59:02.949683  721299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:59:02.999513  721299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1122 00:59:03.020595  721299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516937.pem && ln -fs /usr/share/ca-certificates/516937.pem /etc/ssl/certs/516937.pem"
	I1122 00:59:03.032434  721299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516937.pem
	I1122 00:59:03.036106  721299 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:55 /usr/share/ca-certificates/516937.pem
	I1122 00:59:03.036169  721299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516937.pem
	I1122 00:59:03.080976  721299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516937.pem /etc/ssl/certs/51391683.0"
	I1122 00:59:03.090266  721299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5169372.pem && ln -fs /usr/share/ca-certificates/5169372.pem /etc/ssl/certs/5169372.pem"
	I1122 00:59:03.100350  721299 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5169372.pem
	I1122 00:59:03.104729  721299 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:55 /usr/share/ca-certificates/5169372.pem
	I1122 00:59:03.104805  721299 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5169372.pem
	I1122 00:59:03.152781  721299 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5169372.pem /etc/ssl/certs/3ec20f2e.0"
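
The alternating "openssl x509 -hash" and "ln -fs" runs above maintain the standard OpenSSL trust-store layout: every CA certificate under /usr/share/ca-certificates is reachable in /etc/ssl/certs through a symlink named after its subject hash (b5213941.0 for minikubeCA.pem, 51391683.0 and 3ec20f2e.0 for the test certs). A small sketch of the same idea, shelling out to openssl for the hash; the ".0" suffix handling is simplified, and writing into /etc/ssl/certs needs root:

    // hash_link.go - illustrative: create the <subject-hash>.0 symlink that
    // OpenSSL-based tools use to look up a CA certificate.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func linkCert(certPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return fmt.Errorf("hash %s: %w", certPath, err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(certsDir, hash+".0") // ".0" assumes no hash collision
    	_ = os.Remove(link)                        // replace an existing link, like ln -fs
    	return os.Symlink(certPath, link)
    }

    func main() {
    	err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
    	fmt.Println(err)
    }
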
	I1122 00:59:03.166431  721299 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:59:03.170534  721299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1122 00:59:03.272325  721299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1122 00:59:03.394650  721299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1122 00:59:03.562785  721299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1122 00:59:03.696773  721299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1122 00:59:03.762454  721299 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1122 00:59:03.860436  721299 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-882305 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-882305 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:59:03.860526  721299 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1122 00:59:03.860586  721299 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:59:03.921643  721299 cri.go:89] found id: "ef5cf3bc0e8a1e84b865765165a5244f97715b14ad4afe6bdecb47483cb802ba"
	I1122 00:59:03.921660  721299 cri.go:89] found id: "d1d854f1c70c8c8f58aacea7d3bc3bea0c433b6787c467ffaf9f43d30127f3aa"
	I1122 00:59:03.921671  721299 cri.go:89] found id: "c0ae03824089747781ca3fa95c137501b3b35608e772c7bf534789a146554e3c"
	I1122 00:59:03.921676  721299 cri.go:89] found id: "1ce380445cfc1fe8d2cbb405092ab03fd65cb6c2cf8bac3317898266e679c5d3"
	I1122 00:59:03.921679  721299 cri.go:89] found id: ""
	I1122 00:59:03.921756  721299 ssh_runner.go:195] Run: sudo runc list -f json
	W1122 00:59:03.939051  721299 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:59:03Z" level=error msg="open /run/runc: no such file or directory"
	I1122 00:59:03.939116  721299 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 00:59:03.954585  721299 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1122 00:59:03.954602  721299 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1122 00:59:03.954654  721299 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1122 00:59:03.965013  721299 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1122 00:59:03.965529  721299 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-882305" does not appear in /home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1122 00:59:03.965693  721299 kubeconfig.go:62] /home/jenkins/minikube-integration/21934-513600/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-882305" cluster setting kubeconfig missing "default-k8s-diff-port-882305" context setting]
	I1122 00:59:03.966120  721299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/kubeconfig: {Name:mkcd094d8a765e5a81369c0fb33cb5e696b17bb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:59:03.968808  721299 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1122 00:59:03.979602  721299 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1122 00:59:03.979682  721299 kubeadm.go:602] duration metric: took 25.073203ms to restartPrimaryControlPlane
	I1122 00:59:03.979706  721299 kubeadm.go:403] duration metric: took 119.278562ms to StartCluster
	I1122 00:59:03.979751  721299 settings.go:142] acquiring lock: {Name:mk6c31eb57ec65b047b78b4e1046e03fe7cc77bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:59:03.979855  721299 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1122 00:59:03.980650  721299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/kubeconfig: {Name:mkcd094d8a765e5a81369c0fb33cb5e696b17bb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:59:03.980931  721299 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:59:03.981384  721299 config.go:182] Loaded profile config "default-k8s-diff-port-882305": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:59:03.981337  721299 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1122 00:59:03.981469  721299 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-882305"
	I1122 00:59:03.981493  721299 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-882305"
	W1122 00:59:03.981505  721299 addons.go:248] addon storage-provisioner should already be in state true
	I1122 00:59:03.981535  721299 host.go:66] Checking if "default-k8s-diff-port-882305" exists ...
	I1122 00:59:03.981573  721299 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-882305"
	I1122 00:59:03.981717  721299 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-882305"
	I1122 00:59:03.982038  721299 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-882305 --format={{.State.Status}}
	I1122 00:59:03.982308  721299 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-882305 --format={{.State.Status}}
	I1122 00:59:03.981537  721299 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-882305"
	I1122 00:59:03.982806  721299 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-882305"
	W1122 00:59:03.982820  721299 addons.go:248] addon dashboard should already be in state true
	I1122 00:59:03.982847  721299 host.go:66] Checking if "default-k8s-diff-port-882305" exists ...
	I1122 00:59:03.983278  721299 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-882305 --format={{.State.Status}}
	I1122 00:59:03.987111  721299 out.go:179] * Verifying Kubernetes components...
	I1122 00:59:03.990162  721299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:59:04.046335  721299 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-882305"
	W1122 00:59:04.046357  721299 addons.go:248] addon default-storageclass should already be in state true
	I1122 00:59:04.046381  721299 host.go:66] Checking if "default-k8s-diff-port-882305" exists ...
	I1122 00:59:04.046804  721299 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-882305 --format={{.State.Status}}
	I1122 00:59:04.059982  721299 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1122 00:59:04.062953  721299 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1122 00:59:04.065869  721299 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1122 00:59:04.065980  721299 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:59:04.065990  721299 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1122 00:59:04.066053  721299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-882305
	I1122 00:59:04.068799  721299 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1122 00:59:04.068822  721299 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1122 00:59:04.068964  721299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-882305
	I1122 00:59:04.113989  721299 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1122 00:59:04.114011  721299 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1122 00:59:04.114073  721299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-882305
	I1122 00:59:04.138125  721299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/default-k8s-diff-port-882305/id_rsa Username:docker}
	I1122 00:59:04.157153  721299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/default-k8s-diff-port-882305/id_rsa Username:docker}
	I1122 00:59:04.158719  721299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/default-k8s-diff-port-882305/id_rsa Username:docker}
	I1122 00:59:04.312234  721299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:59:04.345077  721299 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:59:04.409899  721299 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1122 00:59:04.409925  721299 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1122 00:59:02.878250  723417 out.go:252] * Restarting existing docker container for "newest-cni-683181" ...
	I1122 00:59:02.878328  723417 cli_runner.go:164] Run: docker start newest-cni-683181
	I1122 00:59:03.202125  723417 cli_runner.go:164] Run: docker container inspect newest-cni-683181 --format={{.State.Status}}
	I1122 00:59:03.231871  723417 kic.go:430] container "newest-cni-683181" state is running.
	I1122 00:59:03.232226  723417 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-683181
	I1122 00:59:03.263960  723417 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/newest-cni-683181/config.json ...
	I1122 00:59:03.264183  723417 machine.go:94] provisionDockerMachine start ...
	I1122 00:59:03.264249  723417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-683181
	I1122 00:59:03.296456  723417 main.go:143] libmachine: Using SSH client type: native
	I1122 00:59:03.296780  723417 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33817 <nil> <nil>}
	I1122 00:59:03.296789  723417 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:59:03.297331  723417 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55510->127.0.0.1:33817: read: connection reset by peer
	I1122 00:59:06.457716  723417 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-683181
	
	I1122 00:59:06.457760  723417 ubuntu.go:182] provisioning hostname "newest-cni-683181"
	I1122 00:59:06.457844  723417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-683181
	I1122 00:59:06.484016  723417 main.go:143] libmachine: Using SSH client type: native
	I1122 00:59:06.484318  723417 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33817 <nil> <nil>}
	I1122 00:59:06.484329  723417 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-683181 && echo "newest-cni-683181" | sudo tee /etc/hostname
	I1122 00:59:06.671423  723417 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-683181
	
	I1122 00:59:06.671543  723417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-683181
	I1122 00:59:06.696749  723417 main.go:143] libmachine: Using SSH client type: native
	I1122 00:59:06.697129  723417 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33817 <nil> <nil>}
	I1122 00:59:06.697152  723417 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-683181' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-683181/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-683181' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:59:06.866237  723417 main.go:143] libmachine: SSH cmd err, output: <nil>: 
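
The first SSH dial at 00:59:03 is reset because sshd inside the just-started container is not listening yet; libmachine simply retries until the hostname probe succeeds at 00:59:06, then runs the hostname and /etc/hosts commands shown above. A bare-bones sketch of that run-with-retry pattern using golang.org/x/crypto/ssh (address, key path and retry budget are placeholders, not minikube's values):

    // ssh_retry.go - illustrative: run "hostname" over SSH, retrying while the
    // server is still coming up (as happens right after "docker start").
    package main

    import (
    	"fmt"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    func runWithRetry(addr, user, keyPath, cmd string) (string, error) {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return "", err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return "", err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test VM only
    		Timeout:         5 * time.Second,
    	}
    	var lastErr error
    	for i := 0; i < 10; i++ { // crude fixed retry budget
    		client, err := ssh.Dial("tcp", addr, cfg)
    		if err != nil {
    			lastErr = err
    			time.Sleep(time.Second)
    			continue
    		}
    		defer client.Close()
    		sess, err := client.NewSession()
    		if err != nil {
    			return "", err
    		}
    		defer sess.Close()
    		out, err := sess.Output(cmd)
    		return string(out), err
    	}
    	return "", lastErr
    }

    func main() {
    	out, err := runWithRetry("127.0.0.1:33817", "docker",
    		"/home/jenkins/.ssh/id_rsa", "hostname") // placeholder key path
    	fmt.Println(out, err)
    }
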
	I1122 00:59:06.866265  723417 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-513600/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-513600/.minikube}
	I1122 00:59:06.866304  723417 ubuntu.go:190] setting up certificates
	I1122 00:59:06.866314  723417 provision.go:84] configureAuth start
	I1122 00:59:06.866389  723417 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-683181
	I1122 00:59:06.896431  723417 provision.go:143] copyHostCerts
	I1122 00:59:06.896506  723417 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem, removing ...
	I1122 00:59:06.896520  723417 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem
	I1122 00:59:06.896600  723417 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem (1078 bytes)
	I1122 00:59:06.896712  723417 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem, removing ...
	I1122 00:59:06.896724  723417 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem
	I1122 00:59:06.896755  723417 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem (1123 bytes)
	I1122 00:59:06.896872  723417 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem, removing ...
	I1122 00:59:06.896883  723417 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem
	I1122 00:59:06.896916  723417 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem (1675 bytes)
	I1122 00:59:06.896979  723417 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem org=jenkins.newest-cni-683181 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-683181]
	I1122 00:59:07.001299  723417 provision.go:177] copyRemoteCerts
	I1122 00:59:07.001464  723417 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:59:07.001536  723417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-683181
	I1122 00:59:07.019769  723417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/newest-cni-683181/id_rsa Username:docker}
	I1122 00:59:07.135805  723417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:59:07.168308  723417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1122 00:59:07.199555  723417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1122 00:59:07.229013  723417 provision.go:87] duration metric: took 362.67383ms to configureAuth
	I1122 00:59:07.229082  723417 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:59:07.229327  723417 config.go:182] Loaded profile config "newest-cni-683181": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:59:07.229473  723417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-683181
	I1122 00:59:07.252081  723417 main.go:143] libmachine: Using SSH client type: native
	I1122 00:59:07.252390  723417 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33817 <nil> <nil>}
	I1122 00:59:07.252410  723417 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1122 00:59:04.439090  721299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1122 00:59:04.474959  721299 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1122 00:59:04.474979  721299 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1122 00:59:04.562957  721299 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1122 00:59:04.562978  721299 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1122 00:59:04.615816  721299 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1122 00:59:04.615868  721299 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1122 00:59:04.675067  721299 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1122 00:59:04.675133  721299 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1122 00:59:04.694319  721299 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1122 00:59:04.694388  721299 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1122 00:59:04.715314  721299 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1122 00:59:04.715379  721299 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1122 00:59:04.734131  721299 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1122 00:59:04.734196  721299 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1122 00:59:04.750497  721299 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1122 00:59:04.750569  721299 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1122 00:59:04.770972  721299 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
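
The ten dashboard manifests are first copied into /etc/kubernetes/addons and then applied in a single kubectl invocation against the node-local admin kubeconfig. A compact sketch of assembling that invocation (paths copied from the log; the helper itself is illustrative):

    // apply_addons.go - illustrative: apply several addon manifests with one
    // kubectl call, pointing KUBECONFIG at the in-cluster admin config.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func applyManifests(kubectl, kubeconfig string, files []string) error {
    	args := []string{"apply"}
    	for _, f := range files {
    		args = append(args, "-f", f)
    	}
    	cmd := exec.Command(kubectl, args...)
    	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	return cmd.Run()
    }

    func main() {
    	files := []string{ // subset of the manifests listed in the log
    		"/etc/kubernetes/addons/dashboard-ns.yaml",
    		"/etc/kubernetes/addons/dashboard-dp.yaml",
    		"/etc/kubernetes/addons/dashboard-svc.yaml",
    	}
    	err := applyManifests("/var/lib/minikube/binaries/v1.34.1/kubectl",
    		"/var/lib/minikube/kubeconfig", files)
    	fmt.Println("apply finished:", err)
    }
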
	I1122 00:59:07.722329  723417 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1122 00:59:07.722375  723417 machine.go:97] duration metric: took 4.458175162s to provisionDockerMachine
	I1122 00:59:07.722386  723417 start.go:293] postStartSetup for "newest-cni-683181" (driver="docker")
	I1122 00:59:07.722397  723417 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:59:07.722483  723417 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:59:07.722533  723417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-683181
	I1122 00:59:07.753762  723417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/newest-cni-683181/id_rsa Username:docker}
	I1122 00:59:07.874962  723417 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:59:07.879196  723417 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:59:07.879228  723417 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:59:07.879240  723417 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/addons for local assets ...
	I1122 00:59:07.879299  723417 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/files for local assets ...
	I1122 00:59:07.879384  723417 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem -> 5169372.pem in /etc/ssl/certs
	I1122 00:59:07.879495  723417 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:59:07.890236  723417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /etc/ssl/certs/5169372.pem (1708 bytes)
	I1122 00:59:07.919125  723417 start.go:296] duration metric: took 196.723ms for postStartSetup
	I1122 00:59:07.919247  723417 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:59:07.919298  723417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-683181
	I1122 00:59:07.937135  723417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/newest-cni-683181/id_rsa Username:docker}
	I1122 00:59:08.040307  723417 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:59:08.046798  723417 fix.go:56] duration metric: took 5.194965571s for fixHost
	I1122 00:59:08.046893  723417 start.go:83] releasing machines lock for "newest-cni-683181", held for 5.195062472s
	I1122 00:59:08.046993  723417 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-683181
	I1122 00:59:08.074215  723417 ssh_runner.go:195] Run: cat /version.json
	I1122 00:59:08.074278  723417 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:59:08.074346  723417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-683181
	I1122 00:59:08.074280  723417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-683181
	I1122 00:59:08.113980  723417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/newest-cni-683181/id_rsa Username:docker}
	I1122 00:59:08.116442  723417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/newest-cni-683181/id_rsa Username:docker}
	I1122 00:59:08.351101  723417 ssh_runner.go:195] Run: systemctl --version
	I1122 00:59:08.358178  723417 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1122 00:59:08.414487  723417 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:59:08.419622  723417 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:59:08.419701  723417 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:59:08.434364  723417 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1122 00:59:08.434411  723417 start.go:496] detecting cgroup driver to use...
	I1122 00:59:08.434444  723417 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1122 00:59:08.434511  723417 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1122 00:59:08.454654  723417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1122 00:59:08.469117  723417 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:59:08.469180  723417 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:59:08.489744  723417 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:59:08.518366  723417 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:59:08.709392  723417 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:59:08.905180  723417 docker.go:234] disabling docker service ...
	I1122 00:59:08.905257  723417 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:59:08.927428  723417 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:59:08.947641  723417 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:59:09.155408  723417 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:59:09.338450  723417 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:59:09.355624  723417 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:59:09.375009  723417 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1122 00:59:09.375085  723417 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:59:09.392206  723417 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1122 00:59:09.392312  723417 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:59:09.408908  723417 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:59:09.424892  723417 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:59:09.443007  723417 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:59:09.456492  723417 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:59:09.472740  723417 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:59:09.484559  723417 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:59:09.498124  723417 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:59:09.509830  723417 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:59:09.523713  723417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:59:09.726324  723417 ssh_runner.go:195] Run: sudo systemctl restart crio
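
The run of sed one-liners above edits /etc/crio/crio.conf.d/02-crio.conf in place: it pins pause_image to registry.k8s.io/pause:3.10.1, forces cgroup_manager to cgroupfs, puts conmon in the pod cgroup, opens unprivileged ports via default_sysctls, and then restarts CRI-O. The same kind of in-place key rewrite, sketched with a regexp instead of sed (file path from the log, everything else illustrative):

    // crio_conf.go - illustrative: rewrite a single "key = value" line in a CRI-O
    // drop-in, the way the sed commands in the log do for pause_image.
    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    func setPauseImage(path, image string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
    	out := re.ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", image)))
    	return os.WriteFile(path, out, 0o644)
    }

    func main() {
    	err := setPauseImage("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.10.1")
    	fmt.Println(err) // a "systemctl restart crio" would still be needed afterwards
    }
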
	I1122 00:59:09.988596  723417 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1122 00:59:09.988668  723417 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1122 00:59:09.994953  723417 start.go:564] Will wait 60s for crictl version
	I1122 00:59:09.995017  723417 ssh_runner.go:195] Run: which crictl
	I1122 00:59:09.999299  723417 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:59:10.050245  723417 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1122 00:59:10.050332  723417 ssh_runner.go:195] Run: crio --version
	I1122 00:59:10.103645  723417 ssh_runner.go:195] Run: crio --version
	I1122 00:59:10.162439  723417 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1122 00:59:10.165362  723417 cli_runner.go:164] Run: docker network inspect newest-cni-683181 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:59:10.188504  723417 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1122 00:59:10.192964  723417 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
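
This /etc/hosts rewrite (and the identical ones at 00:59:02) is deliberately idempotent: drop any existing host.minikube.internal line, append the current mapping, and copy the temp file back over /etc/hosts, so repeated starts never accumulate duplicate entries. A small sketch that assembles the same bash one-liner for an arbitrary IP/name pair (the helper is hypothetical):

    // hosts_entry.go - illustrative: build the idempotent /etc/hosts update used
    // in the log (drop the old entry, append the new one, copy back).
    package main

    import "fmt"

    func hostsUpdateCmd(ip, name string) string {
    	// grep uses bash $'\t' quoting for a literal tab; the echo embeds a real tab.
    	return fmt.Sprintf("{ grep -v $'\\t%s$' \"/etc/hosts\"; echo \"%s\t%s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ \"/etc/hosts\"", name, ip, name)
    }

    func main() {
    	// Matches the command run at 00:59:10 for host.minikube.internal.
    	fmt.Println(hostsUpdateCmd("192.168.76.1", "host.minikube.internal"))
    }
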
	I1122 00:59:10.209216  723417 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1122 00:59:10.778865  721299 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.466557053s)
	I1122 00:59:10.778940  721299 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.433830001s)
	I1122 00:59:10.778974  721299 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-882305" to be "Ready" ...
	I1122 00:59:10.779306  721299 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.340188909s)
	I1122 00:59:10.779562  721299 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.008491961s)
	I1122 00:59:10.782770  721299 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-882305 addons enable metrics-server
	
	I1122 00:59:10.881477  721299 node_ready.go:49] node "default-k8s-diff-port-882305" is "Ready"
	I1122 00:59:10.881508  721299 node_ready.go:38] duration metric: took 102.515664ms for node "default-k8s-diff-port-882305" to be "Ready" ...
	I1122 00:59:10.881522  721299 api_server.go:52] waiting for apiserver process to appear ...
	I1122 00:59:10.881578  721299 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:59:10.926162  721299 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1122 00:59:10.212156  723417 kubeadm.go:884] updating cluster {Name:newest-cni-683181 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-683181 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1122 00:59:10.212323  723417 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:59:10.212400  723417 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:59:10.268918  723417 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:59:10.268937  723417 crio.go:433] Images already preloaded, skipping extraction
	I1122 00:59:10.268989  723417 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:59:10.346312  723417 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:59:10.346336  723417 cache_images.go:86] Images are preloaded, skipping loading
	I1122 00:59:10.346343  723417 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1122 00:59:10.346437  723417 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-683181 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-683181 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1122 00:59:10.346517  723417 ssh_runner.go:195] Run: crio config
	I1122 00:59:10.438766  723417 cni.go:84] Creating CNI manager for ""
	I1122 00:59:10.438833  723417 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:59:10.438857  723417 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1122 00:59:10.438887  723417 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-683181 NodeName:newest-cni-683181 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 00:59:10.439101  723417 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-683181"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1122 00:59:10.439230  723417 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 00:59:10.450839  723417 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:59:10.450974  723417 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1122 00:59:10.458114  723417 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1122 00:59:10.475213  723417 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:59:10.488054  723417 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
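
The kubeadm.yaml just copied to the node bundles the InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration documents printed above; in this profile the KubeProxyConfiguration's clusterCIDR matches the kubeadm.pod-network-cidr extra option (10.42.0.0/16). A minimal Go sketch of that cross-check (illustration only, not part of the test harness; it assumes gopkg.in/yaml.v3 and reads the path targeted by the scp above):

    // cidrcheck is a minimal sketch (not part of the minikube test suite) that
    // cross-checks the KubeProxyConfiguration rendered above against the
    // kubeadm.pod-network-cidr extra option.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"

    	"gopkg.in/yaml.v3"
    )

    type kubeProxyConfig struct {
    	Kind        string `yaml:"kind"`
    	ClusterCIDR string `yaml:"clusterCIDR"`
    }

    func main() {
    	const wantCIDR = "10.42.0.0/16" // value of kubeadm.pod-network-cidr in this profile

    	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new") // path from the scp in the log
    	if err != nil {
    		panic(err)
    	}
    	// The generated file holds several YAML documents separated by "---" lines.
    	for _, doc := range strings.Split(string(data), "\n---\n") {
    		var cfg kubeProxyConfig
    		if err := yaml.Unmarshal([]byte(doc), &cfg); err != nil {
    			continue // skip documents that fail to parse
    		}
    		if cfg.Kind == "KubeProxyConfiguration" {
    			fmt.Printf("clusterCIDR=%q matches pod-network-cidr: %v\n",
    				cfg.ClusterCIDR, cfg.ClusterCIDR == wantCIDR)
    		}
    	}
    }
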
	I1122 00:59:10.516017  723417 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1122 00:59:10.519936  723417 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:59:10.537302  723417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:59:10.766134  723417 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:59:10.800565  723417 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/newest-cni-683181 for IP: 192.168.76.2
	I1122 00:59:10.800628  723417 certs.go:195] generating shared ca certs ...
	I1122 00:59:10.800658  723417 certs.go:227] acquiring lock for ca certs: {Name:mkaf4c79493334cb2058b349c36be7473837f9b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:59:10.800823  723417 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key
	I1122 00:59:10.800910  723417 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key
	I1122 00:59:10.800946  723417 certs.go:257] generating profile certs ...
	I1122 00:59:10.801069  723417 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/newest-cni-683181/client.key
	I1122 00:59:10.801156  723417 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/newest-cni-683181/apiserver.key.458b4884
	I1122 00:59:10.801240  723417 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/newest-cni-683181/proxy-client.key
	I1122 00:59:10.801381  723417 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem (1338 bytes)
	W1122 00:59:10.801440  723417 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937_empty.pem, impossibly tiny 0 bytes
	I1122 00:59:10.801464  723417 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:59:10.801527  723417 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:59:10.801576  723417 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:59:10.801629  723417 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem (1675 bytes)
	I1122 00:59:10.801709  723417 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem (1708 bytes)
	I1122 00:59:10.802410  723417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:59:10.835955  723417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1122 00:59:10.863704  723417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:59:10.894131  723417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:59:10.943558  723417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/newest-cni-683181/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1122 00:59:10.968974  723417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/newest-cni-683181/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1122 00:59:11.000134  723417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/newest-cni-683181/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:59:11.066994  723417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/newest-cni-683181/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1122 00:59:11.102718  723417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem --> /usr/share/ca-certificates/516937.pem (1338 bytes)
	I1122 00:59:11.159619  723417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /usr/share/ca-certificates/5169372.pem (1708 bytes)
	I1122 00:59:11.209925  723417 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:59:11.247743  723417 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 00:59:11.290352  723417 ssh_runner.go:195] Run: openssl version
	I1122 00:59:11.307175  723417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5169372.pem && ln -fs /usr/share/ca-certificates/5169372.pem /etc/ssl/certs/5169372.pem"
	I1122 00:59:11.331931  723417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5169372.pem
	I1122 00:59:11.336290  723417 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:55 /usr/share/ca-certificates/5169372.pem
	I1122 00:59:11.336393  723417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5169372.pem
	I1122 00:59:11.396981  723417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5169372.pem /etc/ssl/certs/3ec20f2e.0"
	I1122 00:59:11.406734  723417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:59:11.420121  723417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:59:11.427494  723417 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:59:11.427577  723417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:59:11.481193  723417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1122 00:59:11.490649  723417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516937.pem && ln -fs /usr/share/ca-certificates/516937.pem /etc/ssl/certs/516937.pem"
	I1122 00:59:11.505120  723417 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516937.pem
	I1122 00:59:11.511278  723417 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:55 /usr/share/ca-certificates/516937.pem
	I1122 00:59:11.511358  723417 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516937.pem
	I1122 00:59:11.562438  723417 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516937.pem /etc/ssl/certs/51391683.0"
	I1122 00:59:11.572701  723417 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:59:11.578098  723417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1122 00:59:11.622808  723417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1122 00:59:11.669794  723417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1122 00:59:11.715356  723417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1122 00:59:11.760034  723417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1122 00:59:11.810989  723417 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
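
Each openssl x509 -noout -in <cert> -checkend 86400 run above asks whether that control-plane certificate will still be valid 24 hours from now before the cluster restart proceeds. A minimal Go sketch of the same check using only crypto/x509 (illustration only, not minikube's code; the path is one of the certs from the log):

    // certcheck mirrors "openssl x509 -checkend 86400": report whether a PEM
    // certificate will still be valid 24 hours from now.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func validFor(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	// true when the certificate is still valid d from now, i.e. -checkend would exit 0
    	return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
    	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("valid for the next 24h:", ok)
    }
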
	I1122 00:59:11.872227  723417 kubeadm.go:401] StartCluster: {Name:newest-cni-683181 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-683181 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:59:11.872337  723417 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1122 00:59:11.872417  723417 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:59:11.944703  723417 cri.go:89] found id: ""
	I1122 00:59:11.944783  723417 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 00:59:11.957367  723417 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1122 00:59:11.957389  723417 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1122 00:59:11.957472  723417 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1122 00:59:11.967493  723417 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1122 00:59:11.968150  723417 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-683181" does not appear in /home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1122 00:59:11.968438  723417 kubeconfig.go:62] /home/jenkins/minikube-integration/21934-513600/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-683181" cluster setting kubeconfig missing "newest-cni-683181" context setting]
	I1122 00:59:11.968967  723417 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/kubeconfig: {Name:mkcd094d8a765e5a81369c0fb33cb5e696b17bb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:59:11.970846  723417 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1122 00:59:11.984168  723417 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1122 00:59:11.984211  723417 kubeadm.go:602] duration metric: took 26.815683ms to restartPrimaryControlPlane
	I1122 00:59:11.984220  723417 kubeadm.go:403] duration metric: took 112.007514ms to StartCluster
	I1122 00:59:11.984235  723417 settings.go:142] acquiring lock: {Name:mk6c31eb57ec65b047b78b4e1046e03fe7cc77bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:59:11.984302  723417 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1122 00:59:11.985302  723417 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/kubeconfig: {Name:mkcd094d8a765e5a81369c0fb33cb5e696b17bb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:59:11.985517  723417 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:59:11.985894  723417 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1122 00:59:11.985971  723417 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-683181"
	I1122 00:59:11.985984  723417 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-683181"
	W1122 00:59:11.985990  723417 addons.go:248] addon storage-provisioner should already be in state true
	I1122 00:59:11.986011  723417 host.go:66] Checking if "newest-cni-683181" exists ...
	I1122 00:59:11.986731  723417 cli_runner.go:164] Run: docker container inspect newest-cni-683181 --format={{.State.Status}}
	I1122 00:59:11.987017  723417 config.go:182] Loaded profile config "newest-cni-683181": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:59:11.987092  723417 addons.go:70] Setting dashboard=true in profile "newest-cni-683181"
	I1122 00:59:11.987117  723417 addons.go:239] Setting addon dashboard=true in "newest-cni-683181"
	W1122 00:59:11.987146  723417 addons.go:248] addon dashboard should already be in state true
	I1122 00:59:11.987197  723417 host.go:66] Checking if "newest-cni-683181" exists ...
	I1122 00:59:11.987670  723417 cli_runner.go:164] Run: docker container inspect newest-cni-683181 --format={{.State.Status}}
	I1122 00:59:11.991059  723417 addons.go:70] Setting default-storageclass=true in profile "newest-cni-683181"
	I1122 00:59:11.991290  723417 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-683181"
	I1122 00:59:11.992632  723417 cli_runner.go:164] Run: docker container inspect newest-cni-683181 --format={{.State.Status}}
	I1122 00:59:11.991242  723417 out.go:179] * Verifying Kubernetes components...
	I1122 00:59:12.003524  723417 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:59:12.041573  723417 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1122 00:59:12.041653  723417 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1122 00:59:12.047949  723417 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:59:12.047981  723417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1122 00:59:12.048052  723417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-683181
	I1122 00:59:12.054033  723417 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1122 00:59:12.056851  723417 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1122 00:59:12.056878  723417 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1122 00:59:12.056946  723417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-683181
	I1122 00:59:12.062051  723417 addons.go:239] Setting addon default-storageclass=true in "newest-cni-683181"
	W1122 00:59:12.062070  723417 addons.go:248] addon default-storageclass should already be in state true
	I1122 00:59:12.062096  723417 host.go:66] Checking if "newest-cni-683181" exists ...
	I1122 00:59:12.062526  723417 cli_runner.go:164] Run: docker container inspect newest-cni-683181 --format={{.State.Status}}
	I1122 00:59:12.092699  723417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/newest-cni-683181/id_rsa Username:docker}
	I1122 00:59:12.122186  723417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/newest-cni-683181/id_rsa Username:docker}
	I1122 00:59:12.136943  723417 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1122 00:59:12.136963  723417 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1122 00:59:12.137030  723417 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-683181
	I1122 00:59:12.168813  723417 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33817 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/newest-cni-683181/id_rsa Username:docker}
	I1122 00:59:12.410860  723417 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:59:10.929483  721299 addons.go:530] duration metric: took 6.94814783s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1122 00:59:10.946665  721299 api_server.go:72] duration metric: took 6.965669591s to wait for apiserver process to appear ...
	I1122 00:59:10.946691  721299 api_server.go:88] waiting for apiserver healthz status ...
	I1122 00:59:10.946710  721299 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1122 00:59:10.989918  721299 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
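
The healthz wait above polls https://192.168.85.2:8444/healthz until the apiserver answers 200 with body "ok". A minimal Go sketch of such a probe (illustration only; TLS verification is skipped here purely for brevity, which is not how minikube itself dials the API server):

    // healthzprobe polls the apiserver /healthz endpoint until it reports "ok".
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // demo only
    	}
    	url := "https://192.168.85.2:8444/healthz" // endpoint from the log above

    	for i := 0; i < 30; i++ {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
    				fmt.Println("apiserver healthy")
    				return
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("apiserver did not become healthy in time")
    }
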
	I1122 00:59:10.999341  721299 api_server.go:141] control plane version: v1.34.1
	I1122 00:59:10.999377  721299 api_server.go:131] duration metric: took 52.678155ms to wait for apiserver health ...
	I1122 00:59:10.999388  721299 system_pods.go:43] waiting for kube-system pods to appear ...
	I1122 00:59:11.028588  721299 system_pods.go:59] 8 kube-system pods found
	I1122 00:59:11.028630  721299 system_pods.go:61] "coredns-66bc5c9577-448gn" [a2f33c9b-90d6-4197-9606-48fd95ff1ef2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:59:11.028639  721299 system_pods.go:61] "etcd-default-k8s-diff-port-882305" [b7b7077d-891d-48c6-b3dc-2f137b395bc2] Running
	I1122 00:59:11.028648  721299 system_pods.go:61] "kindnet-kcwqj" [52f46f97-517a-4d53-9374-2313d6220643] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1122 00:59:11.028653  721299 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-882305" [64aeddd8-fe12-4e20-86f8-b6b94d180713] Running
	I1122 00:59:11.028660  721299 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-882305" [da7abe5d-c103-4152-a303-9cca02a54d69] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1122 00:59:11.028669  721299 system_pods.go:61] "kube-proxy-59l6x" [7cdb7bc0-14ce-4e33-aca8-95137883f5e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1122 00:59:11.028676  721299 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-882305" [5506ff95-9cc2-4344-b578-eca19040f97a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1122 00:59:11.028684  721299 system_pods.go:61] "storage-provisioner" [fc6390d1-3d5c-4f70-a9bb-7e5d41d44f2a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:59:11.028691  721299 system_pods.go:74] duration metric: took 29.296456ms to wait for pod list to return data ...
	I1122 00:59:11.028704  721299 default_sa.go:34] waiting for default service account to be created ...
	I1122 00:59:11.032103  721299 default_sa.go:45] found service account: "default"
	I1122 00:59:11.032129  721299 default_sa.go:55] duration metric: took 3.41823ms for default service account to be created ...
	I1122 00:59:11.032139  721299 system_pods.go:116] waiting for k8s-apps to be running ...
	I1122 00:59:11.035472  721299 system_pods.go:86] 8 kube-system pods found
	I1122 00:59:11.035507  721299 system_pods.go:89] "coredns-66bc5c9577-448gn" [a2f33c9b-90d6-4197-9606-48fd95ff1ef2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:59:11.035515  721299 system_pods.go:89] "etcd-default-k8s-diff-port-882305" [b7b7077d-891d-48c6-b3dc-2f137b395bc2] Running
	I1122 00:59:11.035524  721299 system_pods.go:89] "kindnet-kcwqj" [52f46f97-517a-4d53-9374-2313d6220643] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1122 00:59:11.035529  721299 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-882305" [64aeddd8-fe12-4e20-86f8-b6b94d180713] Running
	I1122 00:59:11.035537  721299 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-882305" [da7abe5d-c103-4152-a303-9cca02a54d69] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1122 00:59:11.035546  721299 system_pods.go:89] "kube-proxy-59l6x" [7cdb7bc0-14ce-4e33-aca8-95137883f5e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1122 00:59:11.035556  721299 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-882305" [5506ff95-9cc2-4344-b578-eca19040f97a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1122 00:59:11.035562  721299 system_pods.go:89] "storage-provisioner" [fc6390d1-3d5c-4f70-a9bb-7e5d41d44f2a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:59:11.035574  721299 system_pods.go:126] duration metric: took 3.429388ms to wait for k8s-apps to be running ...
	I1122 00:59:11.035583  721299 system_svc.go:44] waiting for kubelet service to be running ....
	I1122 00:59:11.035639  721299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:59:11.064706  721299 system_svc.go:56] duration metric: took 29.112995ms WaitForService to wait for kubelet
	I1122 00:59:11.064735  721299 kubeadm.go:587] duration metric: took 7.083744586s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:59:11.064753  721299 node_conditions.go:102] verifying NodePressure condition ...
	I1122 00:59:11.086906  721299 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1122 00:59:11.086946  721299 node_conditions.go:123] node cpu capacity is 2
	I1122 00:59:11.086961  721299 node_conditions.go:105] duration metric: took 22.202239ms to run NodePressure ...
	I1122 00:59:11.086974  721299 start.go:242] waiting for startup goroutines ...
	I1122 00:59:11.086982  721299 start.go:247] waiting for cluster config update ...
	I1122 00:59:11.086997  721299 start.go:256] writing updated cluster config ...
	I1122 00:59:11.087286  721299 ssh_runner.go:195] Run: rm -f paused
	I1122 00:59:11.098144  721299 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:59:11.157722  721299 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-448gn" in "kube-system" namespace to be "Ready" or be gone ...
	W1122 00:59:13.185114  721299 pod_ready.go:104] pod "coredns-66bc5c9577-448gn" is not "Ready", error: <nil>
	I1122 00:59:12.588306  723417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:59:12.590176  723417 api_server.go:52] waiting for apiserver process to appear ...
	I1122 00:59:12.590264  723417 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:59:12.597203  723417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1122 00:59:12.733409  723417 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1122 00:59:12.733434  723417 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1122 00:59:12.837659  723417 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1122 00:59:12.837680  723417 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1122 00:59:13.016255  723417 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1122 00:59:13.016275  723417 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1122 00:59:13.083796  723417 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1122 00:59:13.083815  723417 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1122 00:59:13.119174  723417 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1122 00:59:13.119202  723417 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1122 00:59:13.159740  723417 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1122 00:59:13.159802  723417 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1122 00:59:13.196569  723417 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1122 00:59:13.196637  723417 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1122 00:59:13.219090  723417 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1122 00:59:13.219119  723417 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1122 00:59:13.236662  723417 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1122 00:59:13.236687  723417 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1122 00:59:13.250317  723417 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1122 00:59:15.662928  721299 pod_ready.go:104] pod "coredns-66bc5c9577-448gn" is not "Ready", error: <nil>
	W1122 00:59:17.664863  721299 pod_ready.go:104] pod "coredns-66bc5c9577-448gn" is not "Ready", error: <nil>
	I1122 00:59:23.422953  723417 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.834612876s)
	I1122 00:59:23.422998  723417 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (10.832719351s)
	I1122 00:59:23.423011  723417 api_server.go:72] duration metric: took 11.437466683s to wait for apiserver process to appear ...
	I1122 00:59:23.423016  723417 api_server.go:88] waiting for apiserver healthz status ...
	I1122 00:59:23.423032  723417 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1122 00:59:23.423326  723417 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.826098401s)
	I1122 00:59:23.423580  723417 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (10.173231632s)
	I1122 00:59:23.427418  723417 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-683181 addons enable metrics-server
	
	I1122 00:59:23.462704  723417 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1122 00:59:23.463728  723417 api_server.go:141] control plane version: v1.34.1
	I1122 00:59:23.463786  723417 api_server.go:131] duration metric: took 40.763607ms to wait for apiserver health ...
	I1122 00:59:23.463810  723417 system_pods.go:43] waiting for kube-system pods to appear ...
	I1122 00:59:23.469060  723417 system_pods.go:59] 8 kube-system pods found
	I1122 00:59:23.469142  723417 system_pods.go:61] "coredns-66bc5c9577-t729j" [aeaa479f-a434-45f0-a153-9930c355bc90] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1122 00:59:23.469167  723417 system_pods.go:61] "etcd-newest-cni-683181" [a7afb010-b8c8-4f7c-b259-9bda74317a71] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1122 00:59:23.469205  723417 system_pods.go:61] "kindnet-bpmkp" [a8d571f3-91a7-4136-8402-f32f10864617] Running
	I1122 00:59:23.469231  723417 system_pods.go:61] "kube-apiserver-newest-cni-683181" [0ae77e9e-2bcc-4530-a9af-edb6a2775a1c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1122 00:59:23.469253  723417 system_pods.go:61] "kube-controller-manager-newest-cni-683181" [b8386b4e-6a08-4989-b637-baf2a4d446bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1122 00:59:23.469286  723417 system_pods.go:61] "kube-proxy-s5mhf" [386ab39d-8d29-482b-b752-52257e97dde8] Running
	I1122 00:59:23.469312  723417 system_pods.go:61] "kube-scheduler-newest-cni-683181" [8b62cbb0-d4b5-487b-bc74-7459fb8fc92f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1122 00:59:23.469332  723417 system_pods.go:61] "storage-provisioner" [1b4ee39a-586b-4b95-b610-8cd6ad0ca178] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1122 00:59:23.469365  723417 system_pods.go:74] duration metric: took 5.537028ms to wait for pod list to return data ...
	I1122 00:59:23.469391  723417 default_sa.go:34] waiting for default service account to be created ...
	I1122 00:59:23.473080  723417 default_sa.go:45] found service account: "default"
	I1122 00:59:23.473101  723417 default_sa.go:55] duration metric: took 3.691839ms for default service account to be created ...
	I1122 00:59:23.473113  723417 kubeadm.go:587] duration metric: took 11.487566994s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1122 00:59:23.473129  723417 node_conditions.go:102] verifying NodePressure condition ...
	I1122 00:59:23.475652  723417 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1122 00:59:23.478173  723417 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1122 00:59:23.478201  723417 node_conditions.go:123] node cpu capacity is 2
	I1122 00:59:23.478213  723417 node_conditions.go:105] duration metric: took 5.079482ms to run NodePressure ...
	I1122 00:59:23.478225  723417 start.go:242] waiting for startup goroutines ...
	I1122 00:59:23.479120  723417 addons.go:530] duration metric: took 11.493222525s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1122 00:59:23.479205  723417 start.go:247] waiting for cluster config update ...
	I1122 00:59:23.479232  723417 start.go:256] writing updated cluster config ...
	I1122 00:59:23.479551  723417 ssh_runner.go:195] Run: rm -f paused
	I1122 00:59:23.580532  723417 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1122 00:59:23.584465  723417 out.go:179] * Done! kubectl is now configured to use "newest-cni-683181" cluster and "default" namespace by default
	W1122 00:59:19.677419  721299 pod_ready.go:104] pod "coredns-66bc5c9577-448gn" is not "Ready", error: <nil>
	W1122 00:59:22.180607  721299 pod_ready.go:104] pod "coredns-66bc5c9577-448gn" is not "Ready", error: <nil>
	W1122 00:59:24.679032  721299 pod_ready.go:104] pod "coredns-66bc5c9577-448gn" is not "Ready", error: <nil>
	W1122 00:59:27.162645  721299 pod_ready.go:104] pod "coredns-66bc5c9577-448gn" is not "Ready", error: <nil>
	W1122 00:59:29.163278  721299 pod_ready.go:104] pod "coredns-66bc5c9577-448gn" is not "Ready", error: <nil>
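
The repeated warnings above come from a wait loop that re-reads the coredns pod until its Ready condition turns true (or the pod goes away). A minimal client-go sketch of the underlying check (illustration only, not minikube's pod_ready.go; the kubeconfig path and pod name are taken from the log):

    // podready reads one pod and reports whether its Ready condition is true.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21934-513600/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(),
    		"coredns-66bc5c9577-448gn", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	ready := false
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    			ready = true
    		}
    	}
    	fmt.Printf("pod %s Ready=%v\n", pod.Name, ready)
    }
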
	
	
	==> CRI-O <==
	Nov 22 00:59:21 newest-cni-683181 crio[616]: time="2025-11-22T00:59:21.423296932Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:59:21 newest-cni-683181 crio[616]: time="2025-11-22T00:59:21.429429035Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-s5mhf/POD" id=9d67c0c5-cf9a-43e9-a2da-ded26e41d558 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 22 00:59:21 newest-cni-683181 crio[616]: time="2025-11-22T00:59:21.429501698Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:59:21 newest-cni-683181 crio[616]: time="2025-11-22T00:59:21.4688268Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=e7e8df5d-c504-4d8f-a01c-d031c625cdff name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 22 00:59:21 newest-cni-683181 crio[616]: time="2025-11-22T00:59:21.473975146Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=9d67c0c5-cf9a-43e9-a2da-ded26e41d558 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 22 00:59:21 newest-cni-683181 crio[616]: time="2025-11-22T00:59:21.498764648Z" level=info msg="Ran pod sandbox 3801f73c18a0299371050750be096e11fc892006f4e7a0df2ad650af342af016 with infra container: kube-system/kindnet-bpmkp/POD" id=e7e8df5d-c504-4d8f-a01c-d031c625cdff name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 22 00:59:21 newest-cni-683181 crio[616]: time="2025-11-22T00:59:21.504954309Z" level=info msg="Ran pod sandbox be61ae63e65ba736f4321fdb424b77590117599e9a79bd1c7a2eddf0d953b694 with infra container: kube-system/kube-proxy-s5mhf/POD" id=9d67c0c5-cf9a-43e9-a2da-ded26e41d558 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 22 00:59:21 newest-cni-683181 crio[616]: time="2025-11-22T00:59:21.54094693Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=0679e05a-df64-4c00-8b07-4b59a3ed8274 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:59:21 newest-cni-683181 crio[616]: time="2025-11-22T00:59:21.565513137Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=70460a3e-740e-4fa5-9a48-37640571794a name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:59:21 newest-cni-683181 crio[616]: time="2025-11-22T00:59:21.56591516Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=ec4147a9-fa98-45ff-a56e-f29ecc1c3ac4 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:59:21 newest-cni-683181 crio[616]: time="2025-11-22T00:59:21.572246544Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=162e1a1e-fe57-4d8d-97dd-33985bb3095b name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:59:21 newest-cni-683181 crio[616]: time="2025-11-22T00:59:21.575440354Z" level=info msg="Creating container: kube-system/kindnet-bpmkp/kindnet-cni" id=9eca5064-c285-4568-b226-1fcd7338e640 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:59:21 newest-cni-683181 crio[616]: time="2025-11-22T00:59:21.575546616Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:59:21 newest-cni-683181 crio[616]: time="2025-11-22T00:59:21.586597656Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:59:21 newest-cni-683181 crio[616]: time="2025-11-22T00:59:21.593602842Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:59:21 newest-cni-683181 crio[616]: time="2025-11-22T00:59:21.597651405Z" level=info msg="Creating container: kube-system/kube-proxy-s5mhf/kube-proxy" id=62460a36-de5b-4424-a281-d0d10c1208f4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:59:21 newest-cni-683181 crio[616]: time="2025-11-22T00:59:21.598227544Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:59:21 newest-cni-683181 crio[616]: time="2025-11-22T00:59:21.619448634Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:59:21 newest-cni-683181 crio[616]: time="2025-11-22T00:59:21.619951405Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:59:21 newest-cni-683181 crio[616]: time="2025-11-22T00:59:21.751221173Z" level=info msg="Created container 7e89eb5cadf037e487a32d9ea4517b84e0355838991972f6b49fefb3847298aa: kube-system/kindnet-bpmkp/kindnet-cni" id=9eca5064-c285-4568-b226-1fcd7338e640 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:59:21 newest-cni-683181 crio[616]: time="2025-11-22T00:59:21.751834308Z" level=info msg="Starting container: 7e89eb5cadf037e487a32d9ea4517b84e0355838991972f6b49fefb3847298aa" id=91648659-81c6-42db-9914-45e5972af62f name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:59:21 newest-cni-683181 crio[616]: time="2025-11-22T00:59:21.765466541Z" level=info msg="Started container" PID=1067 containerID=7e89eb5cadf037e487a32d9ea4517b84e0355838991972f6b49fefb3847298aa description=kube-system/kindnet-bpmkp/kindnet-cni id=91648659-81c6-42db-9914-45e5972af62f name=/runtime.v1.RuntimeService/StartContainer sandboxID=3801f73c18a0299371050750be096e11fc892006f4e7a0df2ad650af342af016
	Nov 22 00:59:21 newest-cni-683181 crio[616]: time="2025-11-22T00:59:21.769607409Z" level=info msg="Created container c6dc4dd7fc04ddecd3b2bb2080ec2863a7ec44e94fdeabd8e2cccba7fd814d22: kube-system/kube-proxy-s5mhf/kube-proxy" id=62460a36-de5b-4424-a281-d0d10c1208f4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:59:21 newest-cni-683181 crio[616]: time="2025-11-22T00:59:21.771469467Z" level=info msg="Starting container: c6dc4dd7fc04ddecd3b2bb2080ec2863a7ec44e94fdeabd8e2cccba7fd814d22" id=2b901052-0042-414d-bf46-ab79473d9020 name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:59:21 newest-cni-683181 crio[616]: time="2025-11-22T00:59:21.779602431Z" level=info msg="Started container" PID=1066 containerID=c6dc4dd7fc04ddecd3b2bb2080ec2863a7ec44e94fdeabd8e2cccba7fd814d22 description=kube-system/kube-proxy-s5mhf/kube-proxy id=2b901052-0042-414d-bf46-ab79473d9020 name=/runtime.v1.RuntimeService/StartContainer sandboxID=be61ae63e65ba736f4321fdb424b77590117599e9a79bd1c7a2eddf0d953b694
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	c6dc4dd7fc04d       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   8 seconds ago       Running             kube-proxy                1                   be61ae63e65ba       kube-proxy-s5mhf                            kube-system
	7e89eb5cadf03       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   8 seconds ago       Running             kindnet-cni               1                   3801f73c18a02       kindnet-bpmkp                               kube-system
	8765e88de49e2       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   18 seconds ago      Running             etcd                      1                   87bdb0869ad50       etcd-newest-cni-683181                      kube-system
	5b3cac023bb69       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   18 seconds ago      Running             kube-controller-manager   1                   4651247fd91ce       kube-controller-manager-newest-cni-683181   kube-system
	2cf11a2399791       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   18 seconds ago      Running             kube-apiserver            1                   653131c6328bc       kube-apiserver-newest-cni-683181            kube-system
	a49ade414a411       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   18 seconds ago      Running             kube-scheduler            1                   7e5ea269b0be6       kube-scheduler-newest-cni-683181            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-683181
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-683181
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=newest-cni-683181
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T00_58_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:58:47 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-683181
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:59:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:59:20 +0000   Sat, 22 Nov 2025 00:58:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:59:20 +0000   Sat, 22 Nov 2025 00:58:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:59:20 +0000   Sat, 22 Nov 2025 00:58:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 22 Nov 2025 00:59:20 +0000   Sat, 22 Nov 2025 00:58:43 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-683181
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c92eea4d3c03c0156e01fcf8691e3907
	  System UUID:                6a33f369-61a2-4323-af82-24618416d16b
	  Boot ID:                    72ac7385-472f-47d1-a23e-bd80468d6e09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-683181                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         40s
	  kube-system                 kindnet-bpmkp                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      35s
	  kube-system                 kube-apiserver-newest-cni-683181             250m (12%)    0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-controller-manager-newest-cni-683181    200m (10%)    0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-proxy-s5mhf                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-scheduler-newest-cni-683181             100m (5%)     0 (0%)      0 (0%)           0 (0%)         40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 33s                kube-proxy       
	  Normal   Starting                 7s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  48s (x8 over 48s)  kubelet          Node newest-cni-683181 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    48s (x8 over 48s)  kubelet          Node newest-cni-683181 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     48s (x8 over 48s)  kubelet          Node newest-cni-683181 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    40s                kubelet          Node newest-cni-683181 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 40s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  40s                kubelet          Node newest-cni-683181 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     40s                kubelet          Node newest-cni-683181 status is now: NodeHasSufficientPID
	  Normal   Starting                 40s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           36s                node-controller  Node newest-cni-683181 event: Registered Node newest-cni-683181 in Controller
	  Normal   Starting                 19s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 19s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  19s (x8 over 19s)  kubelet          Node newest-cni-683181 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    19s (x8 over 19s)  kubelet          Node newest-cni-683181 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     19s (x8 over 19s)  kubelet          Node newest-cni-683181 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5s                 node-controller  Node newest-cni-683181 event: Registered Node newest-cni-683181 in Controller
	
	
	==> dmesg <==
	[Nov22 00:37] overlayfs: idmapped layers are currently not supported
	[ +56.322609] overlayfs: idmapped layers are currently not supported
	[Nov22 00:38] overlayfs: idmapped layers are currently not supported
	[Nov22 00:39] overlayfs: idmapped layers are currently not supported
	[ +23.174928] overlayfs: idmapped layers are currently not supported
	[Nov22 00:41] overlayfs: idmapped layers are currently not supported
	[Nov22 00:42] overlayfs: idmapped layers are currently not supported
	[Nov22 00:44] overlayfs: idmapped layers are currently not supported
	[Nov22 00:45] overlayfs: idmapped layers are currently not supported
	[Nov22 00:46] overlayfs: idmapped layers are currently not supported
	[Nov22 00:48] overlayfs: idmapped layers are currently not supported
	[Nov22 00:50] overlayfs: idmapped layers are currently not supported
	[Nov22 00:51] overlayfs: idmapped layers are currently not supported
	[ +11.900293] overlayfs: idmapped layers are currently not supported
	[ +28.922055] overlayfs: idmapped layers are currently not supported
	[Nov22 00:52] overlayfs: idmapped layers are currently not supported
	[Nov22 00:53] overlayfs: idmapped layers are currently not supported
	[Nov22 00:54] overlayfs: idmapped layers are currently not supported
	[Nov22 00:55] overlayfs: idmapped layers are currently not supported
	[Nov22 00:56] overlayfs: idmapped layers are currently not supported
	[Nov22 00:57] overlayfs: idmapped layers are currently not supported
	[Nov22 00:58] overlayfs: idmapped layers are currently not supported
	[ +43.407301] overlayfs: idmapped layers are currently not supported
	[Nov22 00:59] overlayfs: idmapped layers are currently not supported
	[  +8.585740] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [8765e88de49e2be9fd655ccd870bf0fbf040cf37c257af74ba0018ab6313b34a] <==
	{"level":"warn","ts":"2025-11-22T00:59:17.984230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:18.062500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:18.114364Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:18.179865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:18.252335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:18.313451Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:18.345755Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:18.417454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:18.450364Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:18.524247Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:18.555295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:18.588532Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:18.629971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:18.697885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:18.709234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:18.728110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:18.781102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:18.804202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:18.856403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:18.940304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:19.018585Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:19.066724Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:19.121690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:19.169455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:19.258732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50906","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 00:59:30 up  5:41,  0 user,  load average: 5.93, 4.31, 3.17
	Linux newest-cni-683181 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7e89eb5cadf037e487a32d9ea4517b84e0355838991972f6b49fefb3847298aa] <==
	I1122 00:59:21.916593       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1122 00:59:21.923781       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1122 00:59:21.924010       1 main.go:148] setting mtu 1500 for CNI 
	I1122 00:59:21.924060       1 main.go:178] kindnetd IP family: "ipv4"
	I1122 00:59:21.924100       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-22T00:59:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1122 00:59:22.195823       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1122 00:59:22.195841       1 controller.go:381] "Waiting for informer caches to sync"
	I1122 00:59:22.195850       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1122 00:59:22.196137       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [2cf11a2399791e45cf7ed67b2198c31cbb95b3ccb3913ab1861bd2d43031f670] <==
	I1122 00:59:20.707046       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1122 00:59:20.712852       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1122 00:59:20.712877       1 policy_source.go:240] refreshing policies
	I1122 00:59:20.718716       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1122 00:59:20.718775       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1122 00:59:20.724248       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1122 00:59:20.752122       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1122 00:59:20.757934       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1122 00:59:20.758952       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1122 00:59:20.802418       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1122 00:59:20.802436       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1122 00:59:20.808178       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:59:21.198956       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1122 00:59:21.518596       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1122 00:59:22.849693       1 controller.go:667] quota admission added evaluator for: namespaces
	I1122 00:59:22.929363       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1122 00:59:22.971052       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1122 00:59:23.010909       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1122 00:59:23.156851       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.71.68"}
	I1122 00:59:23.188384       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.39.113"}
	I1122 00:59:25.365690       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1122 00:59:25.390129       1 controller.go:667] quota admission added evaluator for: endpoints
	I1122 00:59:25.501275       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1122 00:59:25.547673       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1122 00:59:25.547725       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [5b3cac023bb69eb303946916eb3bee91968b9f879ebd1c3aacc0ade3047e950b] <==
	I1122 00:59:25.133494       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1122 00:59:25.133616       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1122 00:59:25.139959       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1122 00:59:25.157543       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1122 00:59:25.160882       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1122 00:59:25.168228       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1122 00:59:25.174064       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1122 00:59:25.176848       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1122 00:59:25.193950       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1122 00:59:25.194583       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1122 00:59:25.231262       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1122 00:59:25.231320       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1122 00:59:25.231366       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1122 00:59:25.231376       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1122 00:59:25.195508       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1122 00:59:25.196177       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1122 00:59:25.239075       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:59:25.239205       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:59:25.239231       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1122 00:59:25.240275       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:59:25.262215       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1122 00:59:25.297949       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1122 00:59:25.299236       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:59:25.299259       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1122 00:59:25.299269       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [c6dc4dd7fc04ddecd3b2bb2080ec2863a7ec44e94fdeabd8e2cccba7fd814d22] <==
	I1122 00:59:22.806855       1 server_linux.go:53] "Using iptables proxy"
	I1122 00:59:23.030723       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1122 00:59:23.133973       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1122 00:59:23.134013       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1122 00:59:23.134083       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1122 00:59:23.322216       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1122 00:59:23.322298       1 server_linux.go:132] "Using iptables Proxier"
	I1122 00:59:23.385281       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1122 00:59:23.385654       1 server.go:527] "Version info" version="v1.34.1"
	I1122 00:59:23.385669       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:59:23.416719       1 config.go:200] "Starting service config controller"
	I1122 00:59:23.416738       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1122 00:59:23.416754       1 config.go:106] "Starting endpoint slice config controller"
	I1122 00:59:23.416759       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1122 00:59:23.416769       1 config.go:403] "Starting serviceCIDR config controller"
	I1122 00:59:23.416773       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1122 00:59:23.424288       1 config.go:309] "Starting node config controller"
	I1122 00:59:23.429422       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1122 00:59:23.429439       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1122 00:59:23.517365       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1122 00:59:23.517367       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1122 00:59:23.517391       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a49ade414a411cf4537b0004c3cb9293ea4b12b1790212ea98dcd0dc746c2e0f] <==
	I1122 00:59:15.587906       1 serving.go:386] Generated self-signed cert in-memory
	I1122 00:59:21.206315       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1122 00:59:21.206352       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:59:21.272049       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1122 00:59:21.272156       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1122 00:59:21.272314       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:59:21.272349       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:59:21.272848       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1122 00:59:21.273063       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1122 00:59:21.272931       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1122 00:59:21.273045       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1122 00:59:21.379006       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:59:21.379085       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1122 00:59:21.402404       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 22 00:59:16 newest-cni-683181 kubelet[736]: E1122 00:59:16.377726     736 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-683181\" not found" node="newest-cni-683181"
	Nov 22 00:59:18 newest-cni-683181 kubelet[736]: E1122 00:59:18.848347     736 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-683181\" not found" node="newest-cni-683181"
	Nov 22 00:59:20 newest-cni-683181 kubelet[736]: I1122 00:59:20.537218     736 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-683181"
	Nov 22 00:59:20 newest-cni-683181 kubelet[736]: E1122 00:59:20.783945     736 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-683181\" already exists" pod="kube-system/etcd-newest-cni-683181"
	Nov 22 00:59:20 newest-cni-683181 kubelet[736]: I1122 00:59:20.783999     736 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-683181"
	Nov 22 00:59:20 newest-cni-683181 kubelet[736]: I1122 00:59:20.788405     736 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-683181"
	Nov 22 00:59:20 newest-cni-683181 kubelet[736]: I1122 00:59:20.788552     736 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-683181"
	Nov 22 00:59:20 newest-cni-683181 kubelet[736]: I1122 00:59:20.788584     736 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 22 00:59:20 newest-cni-683181 kubelet[736]: I1122 00:59:20.790125     736 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 22 00:59:20 newest-cni-683181 kubelet[736]: E1122 00:59:20.794353     736 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-683181\" already exists" pod="kube-system/kube-apiserver-newest-cni-683181"
	Nov 22 00:59:20 newest-cni-683181 kubelet[736]: I1122 00:59:20.794510     736 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-683181"
	Nov 22 00:59:20 newest-cni-683181 kubelet[736]: E1122 00:59:20.840794     736 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-683181\" already exists" pod="kube-system/kube-controller-manager-newest-cni-683181"
	Nov 22 00:59:20 newest-cni-683181 kubelet[736]: I1122 00:59:20.840828     736 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-683181"
	Nov 22 00:59:20 newest-cni-683181 kubelet[736]: E1122 00:59:20.864603     736 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-683181\" already exists" pod="kube-system/kube-scheduler-newest-cni-683181"
	Nov 22 00:59:21 newest-cni-683181 kubelet[736]: I1122 00:59:21.084521     736 apiserver.go:52] "Watching apiserver"
	Nov 22 00:59:21 newest-cni-683181 kubelet[736]: I1122 00:59:21.139469     736 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 22 00:59:21 newest-cni-683181 kubelet[736]: I1122 00:59:21.153936     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a8d571f3-91a7-4136-8402-f32f10864617-lib-modules\") pod \"kindnet-bpmkp\" (UID: \"a8d571f3-91a7-4136-8402-f32f10864617\") " pod="kube-system/kindnet-bpmkp"
	Nov 22 00:59:21 newest-cni-683181 kubelet[736]: I1122 00:59:21.153982     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/386ab39d-8d29-482b-b752-52257e97dde8-lib-modules\") pod \"kube-proxy-s5mhf\" (UID: \"386ab39d-8d29-482b-b752-52257e97dde8\") " pod="kube-system/kube-proxy-s5mhf"
	Nov 22 00:59:21 newest-cni-683181 kubelet[736]: I1122 00:59:21.154038     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a8d571f3-91a7-4136-8402-f32f10864617-xtables-lock\") pod \"kindnet-bpmkp\" (UID: \"a8d571f3-91a7-4136-8402-f32f10864617\") " pod="kube-system/kindnet-bpmkp"
	Nov 22 00:59:21 newest-cni-683181 kubelet[736]: I1122 00:59:21.154059     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/386ab39d-8d29-482b-b752-52257e97dde8-xtables-lock\") pod \"kube-proxy-s5mhf\" (UID: \"386ab39d-8d29-482b-b752-52257e97dde8\") " pod="kube-system/kube-proxy-s5mhf"
	Nov 22 00:59:21 newest-cni-683181 kubelet[736]: I1122 00:59:21.154115     736 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a8d571f3-91a7-4136-8402-f32f10864617-cni-cfg\") pod \"kindnet-bpmkp\" (UID: \"a8d571f3-91a7-4136-8402-f32f10864617\") " pod="kube-system/kindnet-bpmkp"
	Nov 22 00:59:21 newest-cni-683181 kubelet[736]: I1122 00:59:21.239667     736 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 22 00:59:25 newest-cni-683181 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 22 00:59:25 newest-cni-683181 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 22 00:59:25 newest-cni-683181 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-683181 -n newest-cni-683181
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-683181 -n newest-cni-683181: exit status 2 (364.761655ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-683181 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-t729j storage-provisioner dashboard-metrics-scraper-6ffb444bf9-nkd9c kubernetes-dashboard-855c9754f9-tlcvt
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-683181 describe pod coredns-66bc5c9577-t729j storage-provisioner dashboard-metrics-scraper-6ffb444bf9-nkd9c kubernetes-dashboard-855c9754f9-tlcvt
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-683181 describe pod coredns-66bc5c9577-t729j storage-provisioner dashboard-metrics-scraper-6ffb444bf9-nkd9c kubernetes-dashboard-855c9754f9-tlcvt: exit status 1 (87.778585ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-t729j" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-nkd9c" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-tlcvt" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-683181 describe pod coredns-66bc5c9577-t729j storage-provisioner dashboard-metrics-scraper-6ffb444bf9-nkd9c kubernetes-dashboard-855c9754f9-tlcvt: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.88s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (9.46s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-882305 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-882305 --alsologtostderr -v=1: exit status 80 (2.431858699s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-882305 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1122 00:59:58.160067  729413 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:59:58.160169  729413 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:59:58.160174  729413 out.go:374] Setting ErrFile to fd 2...
	I1122 00:59:58.160178  729413 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:59:58.160524  729413 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1122 00:59:58.160783  729413 out.go:368] Setting JSON to false
	I1122 00:59:58.160798  729413 mustload.go:66] Loading cluster: default-k8s-diff-port-882305
	I1122 00:59:58.161437  729413 config.go:182] Loaded profile config "default-k8s-diff-port-882305": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:59:58.162227  729413 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-882305 --format={{.State.Status}}
	I1122 00:59:58.181794  729413 host.go:66] Checking if "default-k8s-diff-port-882305" exists ...
	I1122 00:59:58.182173  729413 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:59:58.267624  729413 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-11-22 00:59:58.250247078 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:59:58.268257  729413 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-882305 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1122 00:59:58.271701  729413 out.go:179] * Pausing node default-k8s-diff-port-882305 ... 
	I1122 00:59:58.275264  729413 host.go:66] Checking if "default-k8s-diff-port-882305" exists ...
	I1122 00:59:58.275594  729413 ssh_runner.go:195] Run: systemctl --version
	I1122 00:59:58.275644  729413 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-882305
	I1122 00:59:58.305842  729413 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/default-k8s-diff-port-882305/id_rsa Username:docker}
	I1122 00:59:58.415005  729413 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:59:58.428981  729413 pause.go:52] kubelet running: true
	I1122 00:59:58.429051  729413 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1122 00:59:58.734396  729413 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1122 00:59:58.734559  729413 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1122 00:59:58.840768  729413 cri.go:89] found id: "6797e48f56252ab176007011001840ada8a9976acd404dda959a8334d3c46cdb"
	I1122 00:59:58.840840  729413 cri.go:89] found id: "77f6a3c0f1d2e079997f3dddd18e52dfa729d725f0cb10e1940295c459f10d6b"
	I1122 00:59:58.840861  729413 cri.go:89] found id: "99efae290674153fda78e1cc8d351668db7f8f1a6a89e147416cef08d7b43096"
	I1122 00:59:58.840882  729413 cri.go:89] found id: "24c64924a669eecc6f41d3f6f2a0935ebe7520e41d8214678fc5533fb88d7dd3"
	I1122 00:59:58.840900  729413 cri.go:89] found id: "e34c46d28bc8c466e4b69397894de9dbaf562f334db936138792ea857f7984cf"
	I1122 00:59:58.840933  729413 cri.go:89] found id: "ef5cf3bc0e8a1e84b865765165a5244f97715b14ad4afe6bdecb47483cb802ba"
	I1122 00:59:58.840960  729413 cri.go:89] found id: "d1d854f1c70c8c8f58aacea7d3bc3bea0c433b6787c467ffaf9f43d30127f3aa"
	I1122 00:59:58.840981  729413 cri.go:89] found id: "c0ae03824089747781ca3fa95c137501b3b35608e772c7bf534789a146554e3c"
	I1122 00:59:58.841017  729413 cri.go:89] found id: "1ce380445cfc1fe8d2cbb405092ab03fd65cb6c2cf8bac3317898266e679c5d3"
	I1122 00:59:58.841050  729413 cri.go:89] found id: "e6c681247aa417267830503d9c58396605d3706c59d9c0cf213900ef1533158d"
	I1122 00:59:58.841067  729413 cri.go:89] found id: "93cb84e4ab699eb2196717799321e4330d7fd89c7b5847a1367298b8dc5f69b4"
	I1122 00:59:58.841098  729413 cri.go:89] found id: ""
	I1122 00:59:58.841206  729413 ssh_runner.go:195] Run: sudo runc list -f json
	I1122 00:59:58.857125  729413 retry.go:31] will retry after 218.27384ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:59:58Z" level=error msg="open /run/runc: no such file or directory"
	I1122 00:59:59.075597  729413 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:59:59.091272  729413 pause.go:52] kubelet running: false
	I1122 00:59:59.091378  729413 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1122 00:59:59.359083  729413 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1122 00:59:59.359277  729413 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1122 00:59:59.441130  729413 cri.go:89] found id: "6797e48f56252ab176007011001840ada8a9976acd404dda959a8334d3c46cdb"
	I1122 00:59:59.441155  729413 cri.go:89] found id: "77f6a3c0f1d2e079997f3dddd18e52dfa729d725f0cb10e1940295c459f10d6b"
	I1122 00:59:59.441162  729413 cri.go:89] found id: "99efae290674153fda78e1cc8d351668db7f8f1a6a89e147416cef08d7b43096"
	I1122 00:59:59.441166  729413 cri.go:89] found id: "24c64924a669eecc6f41d3f6f2a0935ebe7520e41d8214678fc5533fb88d7dd3"
	I1122 00:59:59.441170  729413 cri.go:89] found id: "e34c46d28bc8c466e4b69397894de9dbaf562f334db936138792ea857f7984cf"
	I1122 00:59:59.441174  729413 cri.go:89] found id: "ef5cf3bc0e8a1e84b865765165a5244f97715b14ad4afe6bdecb47483cb802ba"
	I1122 00:59:59.441178  729413 cri.go:89] found id: "d1d854f1c70c8c8f58aacea7d3bc3bea0c433b6787c467ffaf9f43d30127f3aa"
	I1122 00:59:59.441181  729413 cri.go:89] found id: "c0ae03824089747781ca3fa95c137501b3b35608e772c7bf534789a146554e3c"
	I1122 00:59:59.441185  729413 cri.go:89] found id: "1ce380445cfc1fe8d2cbb405092ab03fd65cb6c2cf8bac3317898266e679c5d3"
	I1122 00:59:59.441203  729413 cri.go:89] found id: "e6c681247aa417267830503d9c58396605d3706c59d9c0cf213900ef1533158d"
	I1122 00:59:59.441211  729413 cri.go:89] found id: "93cb84e4ab699eb2196717799321e4330d7fd89c7b5847a1367298b8dc5f69b4"
	I1122 00:59:59.441214  729413 cri.go:89] found id: ""
	I1122 00:59:59.441271  729413 ssh_runner.go:195] Run: sudo runc list -f json
	I1122 00:59:59.453026  729413 retry.go:31] will retry after 428.55752ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T00:59:59Z" level=error msg="open /run/runc: no such file or directory"
	I1122 00:59:59.882789  729413 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:59:59.900033  729413 pause.go:52] kubelet running: false
	I1122 00:59:59.900105  729413 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1122 01:00:00.214353  729413 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1122 01:00:00.214450  729413 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1122 01:00:00.433935  729413 cri.go:89] found id: "6797e48f56252ab176007011001840ada8a9976acd404dda959a8334d3c46cdb"
	I1122 01:00:00.433964  729413 cri.go:89] found id: "77f6a3c0f1d2e079997f3dddd18e52dfa729d725f0cb10e1940295c459f10d6b"
	I1122 01:00:00.433971  729413 cri.go:89] found id: "99efae290674153fda78e1cc8d351668db7f8f1a6a89e147416cef08d7b43096"
	I1122 01:00:00.433975  729413 cri.go:89] found id: "24c64924a669eecc6f41d3f6f2a0935ebe7520e41d8214678fc5533fb88d7dd3"
	I1122 01:00:00.433980  729413 cri.go:89] found id: "e34c46d28bc8c466e4b69397894de9dbaf562f334db936138792ea857f7984cf"
	I1122 01:00:00.433985  729413 cri.go:89] found id: "ef5cf3bc0e8a1e84b865765165a5244f97715b14ad4afe6bdecb47483cb802ba"
	I1122 01:00:00.433989  729413 cri.go:89] found id: "d1d854f1c70c8c8f58aacea7d3bc3bea0c433b6787c467ffaf9f43d30127f3aa"
	I1122 01:00:00.433993  729413 cri.go:89] found id: "c0ae03824089747781ca3fa95c137501b3b35608e772c7bf534789a146554e3c"
	I1122 01:00:00.433996  729413 cri.go:89] found id: "1ce380445cfc1fe8d2cbb405092ab03fd65cb6c2cf8bac3317898266e679c5d3"
	I1122 01:00:00.434004  729413 cri.go:89] found id: "e6c681247aa417267830503d9c58396605d3706c59d9c0cf213900ef1533158d"
	I1122 01:00:00.434007  729413 cri.go:89] found id: "93cb84e4ab699eb2196717799321e4330d7fd89c7b5847a1367298b8dc5f69b4"
	I1122 01:00:00.434011  729413 cri.go:89] found id: ""
	I1122 01:00:00.434082  729413 ssh_runner.go:195] Run: sudo runc list -f json
	I1122 01:00:00.465946  729413 out.go:203] 
	W1122 01:00:00.471426  729413 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T01:00:00Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-22T01:00:00Z" level=error msg="open /run/runc: no such file or directory"
	
	W1122 01:00:00.471485  729413 out.go:285] * 
	* 
	W1122 01:00:00.482370  729413 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1122 01:00:00.488476  729413 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-882305 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-882305
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-882305:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3f972239d661c1a7b13b144967a8f04cbcdb72c736bec80a78bae8fab1d357e1",
	        "Created": "2025-11-22T00:57:41.715477223Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 721430,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-22T00:58:54.762328967Z",
	            "FinishedAt": "2025-11-22T00:58:53.878561188Z"
	        },
	        "Image": "sha256:97061622515a85470dad13cb046ad5fc5021fcd2b05aa921a620abb52951cd5d",
	        "ResolvConfPath": "/var/lib/docker/containers/3f972239d661c1a7b13b144967a8f04cbcdb72c736bec80a78bae8fab1d357e1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3f972239d661c1a7b13b144967a8f04cbcdb72c736bec80a78bae8fab1d357e1/hostname",
	        "HostsPath": "/var/lib/docker/containers/3f972239d661c1a7b13b144967a8f04cbcdb72c736bec80a78bae8fab1d357e1/hosts",
	        "LogPath": "/var/lib/docker/containers/3f972239d661c1a7b13b144967a8f04cbcdb72c736bec80a78bae8fab1d357e1/3f972239d661c1a7b13b144967a8f04cbcdb72c736bec80a78bae8fab1d357e1-json.log",
	        "Name": "/default-k8s-diff-port-882305",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-882305:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-882305",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3f972239d661c1a7b13b144967a8f04cbcdb72c736bec80a78bae8fab1d357e1",
	                "LowerDir": "/var/lib/docker/overlay2/2a6ef4a6700f6fb0e1d43ffbcadcf526e3d5c7ff5a78ad3ae005fd03563625b2-init/diff:/var/lib/docker/overlay2/7e8788c6de692bc1c3758a2bb2c4b8da0fbba26855f855c0f3b655bfbdd92f8e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2a6ef4a6700f6fb0e1d43ffbcadcf526e3d5c7ff5a78ad3ae005fd03563625b2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2a6ef4a6700f6fb0e1d43ffbcadcf526e3d5c7ff5a78ad3ae005fd03563625b2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2a6ef4a6700f6fb0e1d43ffbcadcf526e3d5c7ff5a78ad3ae005fd03563625b2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-882305",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-882305/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-882305",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-882305",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-882305",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "22f138de61f9278f25b67745fc7a3d678329237fa47cfcabfb4fe36d425d3a5c",
	            "SandboxKey": "/var/run/docker/netns/22f138de61f9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33812"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33813"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33816"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33814"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33815"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-882305": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:c7:8f:97:68:d5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b345e3fe787228de3ab90525c1947dc1357720a8a249cb6a46c68e40ecbfe59b",
	                    "EndpointID": "e8d8e49b23f29c0542a40582af41dc213ef0d9b4e28836a0e84ef845cfedf6d5",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-882305",
	                        "3f972239d661"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
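
The NetworkSettings.Ports block in the inspect output above (22/tcp published on 127.0.0.1:33812, 8444/tcp on 127.0.0.1:33815, and so on) is the same data the harness reads later in this log through a docker inspect Go template. A minimal Go sketch of that lookup, for orientation only: the container name and the expected value "33812" are taken from the output above, and the snippet is not part of the test harness itself.

	// Sketch: query the host port Docker published for the container's 22/tcp,
	// using the same inspect template that appears in the provisioning log below.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func sshHostPort(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := sshHostPort("default-k8s-diff-port-882305")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("22/tcp published on 127.0.0.1:" + port) // expect "33812" per the Ports block above
	}
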
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-882305 -n default-k8s-diff-port-882305
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-882305 -n default-k8s-diff-port-882305: exit status 2 (732.942966ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-882305 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-882305 logs -n 25: (2.291482643s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ -p no-preload-165130 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-165130            │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │                     │
	│ delete  │ -p no-preload-165130                                                                                                                                                                                                                          │ no-preload-165130            │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │ 22 Nov 25 00:57 UTC │
	│ delete  │ -p no-preload-165130                                                                                                                                                                                                                          │ no-preload-165130            │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │ 22 Nov 25 00:57 UTC │
	│ delete  │ -p disable-driver-mounts-046489                                                                                                                                                                                                               │ disable-driver-mounts-046489 │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │ 22 Nov 25 00:57 UTC │
	│ start   │ -p default-k8s-diff-port-882305 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-882305 │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │ 22 Nov 25 00:58 UTC │
	│ image   │ embed-certs-879000 image list --format=json                                                                                                                                                                                                   │ embed-certs-879000           │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │ 22 Nov 25 00:58 UTC │
	│ pause   │ -p embed-certs-879000 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-879000           │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │                     │
	│ delete  │ -p embed-certs-879000                                                                                                                                                                                                                         │ embed-certs-879000           │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │ 22 Nov 25 00:58 UTC │
	│ delete  │ -p embed-certs-879000                                                                                                                                                                                                                         │ embed-certs-879000           │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │ 22 Nov 25 00:58 UTC │
	│ start   │ -p newest-cni-683181 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-683181            │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │ 22 Nov 25 00:58 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-882305 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-882305 │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-882305 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-882305 │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │ 22 Nov 25 00:58 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-882305 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-882305 │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │ 22 Nov 25 00:58 UTC │
	│ start   │ -p default-k8s-diff-port-882305 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-882305 │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │ 22 Nov 25 00:59 UTC │
	│ addons  │ enable metrics-server -p newest-cni-683181 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-683181            │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │                     │
	│ stop    │ -p newest-cni-683181 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-683181            │ jenkins │ v1.37.0 │ 22 Nov 25 00:59 UTC │ 22 Nov 25 00:59 UTC │
	│ addons  │ enable dashboard -p newest-cni-683181 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-683181            │ jenkins │ v1.37.0 │ 22 Nov 25 00:59 UTC │ 22 Nov 25 00:59 UTC │
	│ start   │ -p newest-cni-683181 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-683181            │ jenkins │ v1.37.0 │ 22 Nov 25 00:59 UTC │ 22 Nov 25 00:59 UTC │
	│ image   │ newest-cni-683181 image list --format=json                                                                                                                                                                                                    │ newest-cni-683181            │ jenkins │ v1.37.0 │ 22 Nov 25 00:59 UTC │ 22 Nov 25 00:59 UTC │
	│ pause   │ -p newest-cni-683181 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-683181            │ jenkins │ v1.37.0 │ 22 Nov 25 00:59 UTC │                     │
	│ delete  │ -p newest-cni-683181                                                                                                                                                                                                                          │ newest-cni-683181            │ jenkins │ v1.37.0 │ 22 Nov 25 00:59 UTC │ 22 Nov 25 00:59 UTC │
	│ delete  │ -p newest-cni-683181                                                                                                                                                                                                                          │ newest-cni-683181            │ jenkins │ v1.37.0 │ 22 Nov 25 00:59 UTC │ 22 Nov 25 00:59 UTC │
	│ start   │ -p auto-163229 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-163229                  │ jenkins │ v1.37.0 │ 22 Nov 25 00:59 UTC │                     │
	│ image   │ default-k8s-diff-port-882305 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-882305 │ jenkins │ v1.37.0 │ 22 Nov 25 00:59 UTC │ 22 Nov 25 00:59 UTC │
	│ pause   │ -p default-k8s-diff-port-882305 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-882305 │ jenkins │ v1.37.0 │ 22 Nov 25 00:59 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 00:59:34
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1122 00:59:34.250089  727352 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:59:34.250301  727352 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:59:34.250346  727352 out.go:374] Setting ErrFile to fd 2...
	I1122 00:59:34.250376  727352 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:59:34.250891  727352 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1122 00:59:34.252058  727352 out.go:368] Setting JSON to false
	I1122 00:59:34.253384  727352 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":20491,"bootTime":1763752684,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1122 00:59:34.253457  727352 start.go:143] virtualization:  
	I1122 00:59:34.257147  727352 out.go:179] * [auto-163229] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1122 00:59:34.261123  727352 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:59:34.261191  727352 notify.go:221] Checking for updates...
	I1122 00:59:34.267559  727352 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:59:34.270502  727352 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1122 00:59:34.273474  727352 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-513600/.minikube
	I1122 00:59:34.276295  727352 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1122 00:59:34.279285  727352 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:59:34.282658  727352 config.go:182] Loaded profile config "default-k8s-diff-port-882305": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:59:34.282783  727352 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:59:34.311096  727352 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1122 00:59:34.311221  727352 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:59:34.369669  727352 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-22 00:59:34.35981101 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:59:34.369777  727352 docker.go:319] overlay module found
	I1122 00:59:34.373047  727352 out.go:179] * Using the docker driver based on user configuration
	W1122 00:59:31.664259  721299 pod_ready.go:104] pod "coredns-66bc5c9577-448gn" is not "Ready", error: <nil>
	W1122 00:59:33.668023  721299 pod_ready.go:104] pod "coredns-66bc5c9577-448gn" is not "Ready", error: <nil>
	I1122 00:59:34.376039  727352 start.go:309] selected driver: docker
	I1122 00:59:34.376058  727352 start.go:930] validating driver "docker" against <nil>
	I1122 00:59:34.376071  727352 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:59:34.376780  727352 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:59:34.436164  727352 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-22 00:59:34.426287157 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:59:34.436334  727352 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1122 00:59:34.436566  727352 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:59:34.439516  727352 out.go:179] * Using Docker driver with root privileges
	I1122 00:59:34.442486  727352 cni.go:84] Creating CNI manager for ""
	I1122 00:59:34.442556  727352 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:59:34.442569  727352 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1122 00:59:34.442655  727352 start.go:353] cluster config:
	{Name:auto-163229 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-163229 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri
o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Au
toPauseInterval:1m0s}
	I1122 00:59:34.445871  727352 out.go:179] * Starting "auto-163229" primary control-plane node in "auto-163229" cluster
	I1122 00:59:34.448580  727352 cache.go:134] Beginning downloading kic base image for docker with crio
	I1122 00:59:34.451577  727352 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:59:34.454488  727352 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:59:34.454532  727352 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1122 00:59:34.454541  727352 cache.go:65] Caching tarball of preloaded images
	I1122 00:59:34.454640  727352 preload.go:238] Found /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1122 00:59:34.454650  727352 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1122 00:59:34.454752  727352 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/config.json ...
	I1122 00:59:34.454793  727352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/config.json: {Name:mk35cd32535ce46e47638e4f6a6d136cd76a9b93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:59:34.454939  727352 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:59:34.476300  727352 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:59:34.476322  727352 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:59:34.476336  727352 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:59:34.476358  727352 start.go:360] acquireMachinesLock for auto-163229: {Name:mkc3cbe710dfebc8fd711e50a5fb6ebe2f3767b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:59:34.476457  727352 start.go:364] duration metric: took 80.105µs to acquireMachinesLock for "auto-163229"
	I1122 00:59:34.476486  727352 start.go:93] Provisioning new machine with config: &{Name:auto-163229 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-163229 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:59:34.476555  727352 start.go:125] createHost starting for "" (driver="docker")
	I1122 00:59:34.480027  727352 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1122 00:59:34.480293  727352 start.go:159] libmachine.API.Create for "auto-163229" (driver="docker")
	I1122 00:59:34.480326  727352 client.go:173] LocalClient.Create starting
	I1122 00:59:34.480401  727352 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem
	I1122 00:59:34.480435  727352 main.go:143] libmachine: Decoding PEM data...
	I1122 00:59:34.480457  727352 main.go:143] libmachine: Parsing certificate...
	I1122 00:59:34.480511  727352 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem
	I1122 00:59:34.480533  727352 main.go:143] libmachine: Decoding PEM data...
	I1122 00:59:34.480548  727352 main.go:143] libmachine: Parsing certificate...
	I1122 00:59:34.480894  727352 cli_runner.go:164] Run: docker network inspect auto-163229 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1122 00:59:34.497281  727352 cli_runner.go:211] docker network inspect auto-163229 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1122 00:59:34.497360  727352 network_create.go:284] running [docker network inspect auto-163229] to gather additional debugging logs...
	I1122 00:59:34.497381  727352 cli_runner.go:164] Run: docker network inspect auto-163229
	W1122 00:59:34.521231  727352 cli_runner.go:211] docker network inspect auto-163229 returned with exit code 1
	I1122 00:59:34.521284  727352 network_create.go:287] error running [docker network inspect auto-163229]: docker network inspect auto-163229: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-163229 not found
	I1122 00:59:34.521299  727352 network_create.go:289] output of [docker network inspect auto-163229]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-163229 not found
	
	** /stderr **
	I1122 00:59:34.521397  727352 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:59:34.538964  727352 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b16c782e3da8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ba:82:00:9d:45:d0} reservation:<nil>}
	I1122 00:59:34.539319  727352 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-13c9c00b5de5 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:7a:4e:a4:3d:42:9e} reservation:<nil>}
	I1122 00:59:34.539686  727352 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3c074a6aa87b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9a:1f:77:e5:90:0b} reservation:<nil>}
	I1122 00:59:34.540128  727352 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a06320}
	I1122 00:59:34.540152  727352 network_create.go:124] attempt to create docker network auto-163229 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1122 00:59:34.540207  727352 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-163229 auto-163229
	I1122 00:59:34.605220  727352 network_create.go:108] docker network auto-163229 192.168.76.0/24 created
	I1122 00:59:34.605249  727352 kic.go:121] calculated static IP "192.168.76.2" for the "auto-163229" container
	I1122 00:59:34.605347  727352 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1122 00:59:34.622876  727352 cli_runner.go:164] Run: docker volume create auto-163229 --label name.minikube.sigs.k8s.io=auto-163229 --label created_by.minikube.sigs.k8s.io=true
	I1122 00:59:34.640655  727352 oci.go:103] Successfully created a docker volume auto-163229
	I1122 00:59:34.640756  727352 cli_runner.go:164] Run: docker run --rm --name auto-163229-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-163229 --entrypoint /usr/bin/test -v auto-163229:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -d /var/lib
	I1122 00:59:35.191753  727352 oci.go:107] Successfully prepared a docker volume auto-163229
	I1122 00:59:35.191836  727352 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:59:35.191854  727352 kic.go:194] Starting extracting preloaded images to volume ...
	I1122 00:59:35.191921  727352 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-163229:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir
	W1122 00:59:36.163083  721299 pod_ready.go:104] pod "coredns-66bc5c9577-448gn" is not "Ready", error: <nil>
	W1122 00:59:38.664088  721299 pod_ready.go:104] pod "coredns-66bc5c9577-448gn" is not "Ready", error: <nil>
	I1122 00:59:39.536076  727352 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-163229:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir: (4.344115048s)
	I1122 00:59:39.536115  727352 kic.go:203] duration metric: took 4.344257978s to extract preloaded images to volume ...
	W1122 00:59:39.536253  727352 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1122 00:59:39.536370  727352 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1122 00:59:39.586887  727352 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-163229 --name auto-163229 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-163229 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-163229 --network auto-163229 --ip 192.168.76.2 --volume auto-163229:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e
	I1122 00:59:39.886587  727352 cli_runner.go:164] Run: docker container inspect auto-163229 --format={{.State.Running}}
	I1122 00:59:39.908705  727352 cli_runner.go:164] Run: docker container inspect auto-163229 --format={{.State.Status}}
	I1122 00:59:39.938080  727352 cli_runner.go:164] Run: docker exec auto-163229 stat /var/lib/dpkg/alternatives/iptables
	I1122 00:59:39.991103  727352 oci.go:144] the created container "auto-163229" has a running status.
	I1122 00:59:39.991141  727352 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/auto-163229/id_rsa...
	I1122 00:59:40.315442  727352 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21934-513600/.minikube/machines/auto-163229/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1122 00:59:40.341572  727352 cli_runner.go:164] Run: docker container inspect auto-163229 --format={{.State.Status}}
	I1122 00:59:40.376319  727352 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1122 00:59:40.376340  727352 kic_runner.go:114] Args: [docker exec --privileged auto-163229 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1122 00:59:40.440720  727352 cli_runner.go:164] Run: docker container inspect auto-163229 --format={{.State.Status}}
	I1122 00:59:40.471504  727352 machine.go:94] provisionDockerMachine start ...
	I1122 00:59:40.471593  727352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-163229
	I1122 00:59:40.489316  727352 main.go:143] libmachine: Using SSH client type: native
	I1122 00:59:40.489660  727352 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33822 <nil> <nil>}
	I1122 00:59:40.489674  727352 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:59:40.490347  727352 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33168->127.0.0.1:33822: read: connection reset by peer
	I1122 00:59:43.641618  727352 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-163229
	
	I1122 00:59:43.641644  727352 ubuntu.go:182] provisioning hostname "auto-163229"
	I1122 00:59:43.641714  727352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-163229
	I1122 00:59:43.661338  727352 main.go:143] libmachine: Using SSH client type: native
	I1122 00:59:43.661645  727352 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33822 <nil> <nil>}
	I1122 00:59:43.661655  727352 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-163229 && echo "auto-163229" | sudo tee /etc/hostname
	I1122 00:59:43.815194  727352 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-163229
	
	I1122 00:59:43.815294  727352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-163229
	I1122 00:59:43.841068  727352 main.go:143] libmachine: Using SSH client type: native
	I1122 00:59:43.841400  727352 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33822 <nil> <nil>}
	I1122 00:59:43.841421  727352 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-163229' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-163229/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-163229' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:59:43.982106  727352 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1122 00:59:43.982184  727352 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-513600/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-513600/.minikube}
	I1122 00:59:43.982211  727352 ubuntu.go:190] setting up certificates
	I1122 00:59:43.982221  727352 provision.go:84] configureAuth start
	I1122 00:59:43.982291  727352 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-163229
	I1122 00:59:44.001187  727352 provision.go:143] copyHostCerts
	I1122 00:59:44.001266  727352 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem, removing ...
	I1122 00:59:44.001278  727352 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem
	I1122 00:59:44.001371  727352 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem (1123 bytes)
	I1122 00:59:44.001478  727352 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem, removing ...
	I1122 00:59:44.001483  727352 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem
	I1122 00:59:44.001512  727352 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem (1675 bytes)
	I1122 00:59:44.001567  727352 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem, removing ...
	I1122 00:59:44.001571  727352 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem
	I1122 00:59:44.001601  727352 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem (1078 bytes)
	I1122 00:59:44.001654  727352 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem org=jenkins.auto-163229 san=[127.0.0.1 192.168.76.2 auto-163229 localhost minikube]
	W1122 00:59:41.163906  721299 pod_ready.go:104] pod "coredns-66bc5c9577-448gn" is not "Ready", error: <nil>
	W1122 00:59:43.665278  721299 pod_ready.go:104] pod "coredns-66bc5c9577-448gn" is not "Ready", error: <nil>
	I1122 00:59:44.667728  721299 pod_ready.go:94] pod "coredns-66bc5c9577-448gn" is "Ready"
	I1122 00:59:44.667759  721299 pod_ready.go:86] duration metric: took 33.510009512s for pod "coredns-66bc5c9577-448gn" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:59:44.670826  721299 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-882305" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:59:44.676764  721299 pod_ready.go:94] pod "etcd-default-k8s-diff-port-882305" is "Ready"
	I1122 00:59:44.676800  721299 pod_ready.go:86] duration metric: took 5.948955ms for pod "etcd-default-k8s-diff-port-882305" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:59:44.679257  721299 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-882305" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:59:44.685446  721299 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-882305" is "Ready"
	I1122 00:59:44.685474  721299 pod_ready.go:86] duration metric: took 6.193233ms for pod "kube-apiserver-default-k8s-diff-port-882305" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:59:44.687892  721299 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-882305" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:59:44.863822  721299 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-882305" is "Ready"
	I1122 00:59:44.863854  721299 pod_ready.go:86] duration metric: took 175.932972ms for pod "kube-controller-manager-default-k8s-diff-port-882305" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:59:45.067555  721299 pod_ready.go:83] waiting for pod "kube-proxy-59l6x" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:59:45.461778  721299 pod_ready.go:94] pod "kube-proxy-59l6x" is "Ready"
	I1122 00:59:45.461831  721299 pod_ready.go:86] duration metric: took 394.248623ms for pod "kube-proxy-59l6x" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:59:45.662473  721299 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-882305" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:59:46.061220  721299 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-882305" is "Ready"
	I1122 00:59:46.061273  721299 pod_ready.go:86] duration metric: took 398.759783ms for pod "kube-scheduler-default-k8s-diff-port-882305" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:59:46.061287  721299 pod_ready.go:40] duration metric: took 34.963106194s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:59:46.155480  721299 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1122 00:59:46.159306  721299 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-882305" cluster and "default" namespace by default
	I1122 00:59:44.634681  727352 provision.go:177] copyRemoteCerts
	I1122 00:59:44.634770  727352 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:59:44.634843  727352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-163229
	I1122 00:59:44.654432  727352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/auto-163229/id_rsa Username:docker}
	I1122 00:59:44.765688  727352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:59:44.785138  727352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1122 00:59:44.804882  727352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1122 00:59:44.833145  727352 provision.go:87] duration metric: took 850.905992ms to configureAuth
	I1122 00:59:44.833170  727352 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:59:44.833366  727352 config.go:182] Loaded profile config "auto-163229": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:59:44.833498  727352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-163229
	I1122 00:59:44.853143  727352 main.go:143] libmachine: Using SSH client type: native
	I1122 00:59:44.854123  727352 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33822 <nil> <nil>}
	I1122 00:59:44.854149  727352 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1122 00:59:45.278826  727352 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1122 00:59:45.279156  727352 machine.go:97] duration metric: took 4.807628961s to provisionDockerMachine
	I1122 00:59:45.279245  727352 client.go:176] duration metric: took 10.798904715s to LocalClient.Create
	I1122 00:59:45.279416  727352 start.go:167] duration metric: took 10.799120652s to libmachine.API.Create "auto-163229"
	I1122 00:59:45.279511  727352 start.go:293] postStartSetup for "auto-163229" (driver="docker")
	I1122 00:59:45.279523  727352 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:59:45.279589  727352 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:59:45.279638  727352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-163229
	I1122 00:59:45.310981  727352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/auto-163229/id_rsa Username:docker}
	I1122 00:59:45.420665  727352 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:59:45.424168  727352 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:59:45.424199  727352 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:59:45.424211  727352 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/addons for local assets ...
	I1122 00:59:45.424265  727352 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/files for local assets ...
	I1122 00:59:45.424345  727352 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem -> 5169372.pem in /etc/ssl/certs
	I1122 00:59:45.424453  727352 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:59:45.433646  727352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /etc/ssl/certs/5169372.pem (1708 bytes)
	I1122 00:59:45.462922  727352 start.go:296] duration metric: took 183.396783ms for postStartSetup
	I1122 00:59:45.463342  727352 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-163229
	I1122 00:59:45.481009  727352 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/config.json ...
	I1122 00:59:45.481316  727352 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:59:45.481370  727352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-163229
	I1122 00:59:45.498886  727352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/auto-163229/id_rsa Username:docker}
	I1122 00:59:45.598859  727352 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:59:45.603721  727352 start.go:128] duration metric: took 11.127150238s to createHost
	I1122 00:59:45.603746  727352 start.go:83] releasing machines lock for "auto-163229", held for 11.127275854s
	I1122 00:59:45.603838  727352 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-163229
	I1122 00:59:45.621850  727352 ssh_runner.go:195] Run: cat /version.json
	I1122 00:59:45.621890  727352 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:59:45.621917  727352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-163229
	I1122 00:59:45.621959  727352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-163229
	I1122 00:59:45.643182  727352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/auto-163229/id_rsa Username:docker}
	I1122 00:59:45.659784  727352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/auto-163229/id_rsa Username:docker}
	I1122 00:59:45.838734  727352 ssh_runner.go:195] Run: systemctl --version
	I1122 00:59:45.845373  727352 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1122 00:59:45.887568  727352 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:59:45.892002  727352 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:59:45.892077  727352 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:59:45.921513  727352 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
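	For reference, the dense find/-exec one-liner above parks any pre-existing bridge and podman CNI configs so CRI-O ignores them. A roughly equivalent, more readable form (illustrative only, not minikube's actual code) would be:

    # hypothetical readable equivalent of the find/-exec rename above
    for f in /etc/cni/net.d/*bridge* /etc/cni/net.d/*podman*; do
      [ -f "$f" ] || continue                        # skip if the glob matched nothing
      case "$f" in *.mk_disabled) continue ;; esac   # already disabled
      sudo mv "$f" "$f.mk_disabled"                  # rename so CRI-O no longer loads it
    done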
	I1122 00:59:45.921536  727352 start.go:496] detecting cgroup driver to use...
	I1122 00:59:45.921579  727352 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1122 00:59:45.921637  727352 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1122 00:59:45.939064  727352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1122 00:59:45.952530  727352 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:59:45.952597  727352 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:59:45.970430  727352 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:59:45.989037  727352 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:59:46.141203  727352 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:59:46.326829  727352 docker.go:234] disabling docker service ...
	I1122 00:59:46.326889  727352 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:59:46.362021  727352 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:59:46.378296  727352 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:59:46.551394  727352 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:59:46.722367  727352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:59:46.735812  727352 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:59:46.750270  727352 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1122 00:59:46.750339  727352 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:59:46.759004  727352 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1122 00:59:46.759087  727352 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:59:46.768325  727352 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:59:46.778517  727352 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:59:46.787944  727352 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:59:46.796508  727352 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:59:46.805341  727352 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:59:46.819565  727352 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
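	The sed edits above adjust the existing CRI-O drop-in in place (pause image, cgroup manager, conmon cgroup and the unprivileged-port sysctl), and the earlier tee wrote /etc/crictl.yaml so crictl targets the CRI-O socket. A quick spot-check on the node, with the expected values (taken from the log lines above) shown as comments:

    cat /etc/crictl.yaml
    # runtime-endpoint: unix:///var/run/crio/crio.sock
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",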
	I1122 00:59:46.828055  727352 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:59:46.835498  727352 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:59:46.842398  727352 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:59:46.962503  727352 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1122 00:59:47.144582  727352 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1122 00:59:47.144729  727352 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1122 00:59:47.148663  727352 start.go:564] Will wait 60s for crictl version
	I1122 00:59:47.148723  727352 ssh_runner.go:195] Run: which crictl
	I1122 00:59:47.152354  727352 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:59:47.178750  727352 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1122 00:59:47.178833  727352 ssh_runner.go:195] Run: crio --version
	I1122 00:59:47.208313  727352 ssh_runner.go:195] Run: crio --version
	I1122 00:59:47.251382  727352 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1122 00:59:47.254294  727352 cli_runner.go:164] Run: docker network inspect auto-163229 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:59:47.270822  727352 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1122 00:59:47.275570  727352 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
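	The bash -c snippet above is an idempotent hosts-file update: it strips any stale host.minikube.internal entry and re-adds it pointing at the host-side gateway address. It can be verified on the node with (names and IP taken from the log):

    grep 'host.minikube.internal' /etc/hosts
    # 192.168.76.1    host.minikube.internal
    getent hosts host.minikube.internal    # resolves via /etc/hosts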
	I1122 00:59:47.285463  727352 kubeadm.go:884] updating cluster {Name:auto-163229 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-163229 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1122 00:59:47.285587  727352 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:59:47.285646  727352 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:59:47.320213  727352 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:59:47.320237  727352 crio.go:433] Images already preloaded, skipping extraction
	I1122 00:59:47.320293  727352 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:59:47.346511  727352 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:59:47.346536  727352 cache_images.go:86] Images are preloaded, skipping loading
	I1122 00:59:47.346545  727352 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1122 00:59:47.346633  727352 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-163229 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-163229 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1122 00:59:47.346716  727352 ssh_runner.go:195] Run: crio config
	I1122 00:59:47.407109  727352 cni.go:84] Creating CNI manager for ""
	I1122 00:59:47.407131  727352 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:59:47.407166  727352 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1122 00:59:47.407194  727352 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-163229 NodeName:auto-163229 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 00:59:47.407346  727352 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-163229"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
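	The kubeadm config printed above is what later gets copied to /var/tmp/minikube/kubeadm.yaml.new and fed to kubeadm init (see the command at 00:59:48.898436). Outside of minikube's own flow, the same file can be exercised without actually bootstrapping the node, e.g. (illustrative):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
        --config /var/tmp/minikube/kubeadm.yaml --dry-run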
	
	I1122 00:59:47.407423  727352 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 00:59:47.415468  727352 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:59:47.415542  727352 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1122 00:59:47.423413  727352 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1122 00:59:47.438444  727352 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:59:47.451577  727352 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1122 00:59:47.467750  727352 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1122 00:59:47.471271  727352 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:59:47.480619  727352 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:59:47.602183  727352 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:59:47.616885  727352 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229 for IP: 192.168.76.2
	I1122 00:59:47.616905  727352 certs.go:195] generating shared ca certs ...
	I1122 00:59:47.616920  727352 certs.go:227] acquiring lock for ca certs: {Name:mkaf4c79493334cb2058b349c36be7473837f9b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:59:47.617054  727352 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key
	I1122 00:59:47.617101  727352 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key
	I1122 00:59:47.617108  727352 certs.go:257] generating profile certs ...
	I1122 00:59:47.617160  727352 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/client.key
	I1122 00:59:47.617171  727352 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/client.crt with IP's: []
	I1122 00:59:47.773612  727352 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/client.crt ...
	I1122 00:59:47.773647  727352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/client.crt: {Name:mk5580137d08122d88e24c87855247707ecd684e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:59:47.773893  727352 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/client.key ...
	I1122 00:59:47.773908  727352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/client.key: {Name:mk70359a60a71ae6aafe962c3d06ef297af57caf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:59:47.774008  727352 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/apiserver.key.b2ec0605
	I1122 00:59:47.774027  727352 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/apiserver.crt.b2ec0605 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1122 00:59:48.157177  727352 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/apiserver.crt.b2ec0605 ...
	I1122 00:59:48.157210  727352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/apiserver.crt.b2ec0605: {Name:mk04b9b1215305a4ce60df91b5f1218eb67834ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:59:48.157401  727352 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/apiserver.key.b2ec0605 ...
	I1122 00:59:48.157418  727352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/apiserver.key.b2ec0605: {Name:mk4b495c49e8f0dcb2b9ed7589a131239c2e003d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:59:48.157508  727352 certs.go:382] copying /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/apiserver.crt.b2ec0605 -> /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/apiserver.crt
	I1122 00:59:48.157588  727352 certs.go:386] copying /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/apiserver.key.b2ec0605 -> /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/apiserver.key
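	The apiserver cert generated and renamed above is signed for the service IP, localhost and the node IP (see the IP list at 00:59:47.774027); once it is copied onto the node its SANs can be inspected with openssl (illustrative):

    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
        | grep -A1 'Subject Alternative Name'
    # expected to list 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.76.2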
	I1122 00:59:48.157651  727352 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/proxy-client.key
	I1122 00:59:48.157670  727352 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/proxy-client.crt with IP's: []
	I1122 00:59:48.343366  727352 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/proxy-client.crt ...
	I1122 00:59:48.343403  727352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/proxy-client.crt: {Name:mk7ec962a4e17ad4a186ee789f8019e69095844d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:59:48.343584  727352 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/proxy-client.key ...
	I1122 00:59:48.343595  727352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/proxy-client.key: {Name:mkc71167ce9561739ff6445f93ee6d2004526d29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:59:48.343786  727352 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem (1338 bytes)
	W1122 00:59:48.343834  727352 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937_empty.pem, impossibly tiny 0 bytes
	I1122 00:59:48.343851  727352 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:59:48.343879  727352 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:59:48.343907  727352 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:59:48.343936  727352 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem (1675 bytes)
	I1122 00:59:48.343987  727352 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem (1708 bytes)
	I1122 00:59:48.344614  727352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:59:48.362870  727352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1122 00:59:48.385911  727352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:59:48.404064  727352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:59:48.423748  727352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1122 00:59:48.442037  727352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1122 00:59:48.464720  727352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:59:48.483000  727352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1122 00:59:48.502692  727352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /usr/share/ca-certificates/5169372.pem (1708 bytes)
	I1122 00:59:48.530443  727352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:59:48.553388  727352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem --> /usr/share/ca-certificates/516937.pem (1338 bytes)
	I1122 00:59:48.575191  727352 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 00:59:48.589563  727352 ssh_runner.go:195] Run: openssl version
	I1122 00:59:48.596256  727352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:59:48.604477  727352 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:59:48.608272  727352 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:59:48.608351  727352 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:59:48.649460  727352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1122 00:59:48.658306  727352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516937.pem && ln -fs /usr/share/ca-certificates/516937.pem /etc/ssl/certs/516937.pem"
	I1122 00:59:48.666677  727352 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516937.pem
	I1122 00:59:48.670595  727352 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:55 /usr/share/ca-certificates/516937.pem
	I1122 00:59:48.670691  727352 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516937.pem
	I1122 00:59:48.711944  727352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516937.pem /etc/ssl/certs/51391683.0"
	I1122 00:59:48.720024  727352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5169372.pem && ln -fs /usr/share/ca-certificates/5169372.pem /etc/ssl/certs/5169372.pem"
	I1122 00:59:48.728115  727352 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5169372.pem
	I1122 00:59:48.731676  727352 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:55 /usr/share/ca-certificates/5169372.pem
	I1122 00:59:48.731744  727352 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5169372.pem
	I1122 00:59:48.772569  727352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5169372.pem /etc/ssl/certs/3ec20f2e.0"
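	The test -L / ln -fs pattern in the three runs above implements OpenSSL's subject-hash lookup: each CA under /etc/ssl/certs needs a symlink named <subject-hash>.0 so TLS clients can find it by hash. A minimal sketch of the same technique for one of the certs above:

    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")     # b5213941 per the log line above
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"    # symlink named by subject hash
    openssl verify -CApath /etc/ssl/certs "$cert"     # should print: OK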
	I1122 00:59:48.780728  727352 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:59:48.784018  727352 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1122 00:59:48.784071  727352 kubeadm.go:401] StartCluster: {Name:auto-163229 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-163229 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:59:48.784153  727352 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1122 00:59:48.784216  727352 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:59:48.811826  727352 cri.go:89] found id: ""
	I1122 00:59:48.811939  727352 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 00:59:48.820457  727352 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1122 00:59:48.828313  727352 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1122 00:59:48.828406  727352 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1122 00:59:48.837469  727352 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1122 00:59:48.837487  727352 kubeadm.go:158] found existing configuration files:
	
	I1122 00:59:48.837535  727352 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1122 00:59:48.845373  727352 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1122 00:59:48.845463  727352 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1122 00:59:48.852820  727352 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1122 00:59:48.860513  727352 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1122 00:59:48.860599  727352 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1122 00:59:48.868196  727352 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1122 00:59:48.876166  727352 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1122 00:59:48.876238  727352 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1122 00:59:48.883537  727352 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1122 00:59:48.890999  727352 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1122 00:59:48.891063  727352 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1122 00:59:48.898436  727352 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1122 00:59:48.942457  727352 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1122 00:59:48.942683  727352 kubeadm.go:319] [preflight] Running pre-flight checks
	I1122 00:59:48.964789  727352 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1122 00:59:48.964868  727352 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1122 00:59:48.964908  727352 kubeadm.go:319] OS: Linux
	I1122 00:59:48.964960  727352 kubeadm.go:319] CGROUPS_CPU: enabled
	I1122 00:59:48.965013  727352 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1122 00:59:48.965063  727352 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1122 00:59:48.965115  727352 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1122 00:59:48.965166  727352 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1122 00:59:48.965218  727352 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1122 00:59:48.965267  727352 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1122 00:59:48.965318  727352 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1122 00:59:48.965368  727352 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1122 00:59:49.030716  727352 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1122 00:59:49.030833  727352 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1122 00:59:49.030930  727352 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1122 00:59:49.041285  727352 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1122 00:59:49.047898  727352 out.go:252]   - Generating certificates and keys ...
	I1122 00:59:49.047995  727352 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1122 00:59:49.048070  727352 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1122 00:59:49.328163  727352 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1122 00:59:50.196631  727352 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1122 00:59:50.595835  727352 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1122 00:59:51.368980  727352 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1122 00:59:52.157278  727352 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1122 00:59:52.157497  727352 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-163229 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1122 00:59:52.579519  727352 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1122 00:59:52.579882  727352 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-163229 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1122 00:59:53.580391  727352 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1122 00:59:53.794266  727352 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1122 00:59:54.111652  727352 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1122 00:59:54.112005  727352 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1122 00:59:54.724094  727352 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1122 00:59:55.736272  727352 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1122 00:59:55.865678  727352 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1122 00:59:56.405655  727352 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1122 00:59:57.098307  727352 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1122 00:59:57.099700  727352 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1122 00:59:57.103812  727352 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1122 00:59:57.107123  727352 out.go:252]   - Booting up control plane ...
	I1122 00:59:57.107221  727352 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1122 00:59:57.107307  727352 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1122 00:59:57.107376  727352 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1122 00:59:57.124097  727352 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1122 00:59:57.124219  727352 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1122 00:59:57.132222  727352 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1122 00:59:57.132535  727352 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1122 00:59:57.132581  727352 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1122 00:59:57.254893  727352 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1122 00:59:57.255034  727352 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
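	The kubelet-check above polls the kubelet's local healthz endpoint while the static control-plane pods come up; the same probe can be run by hand on the node:

    curl -sf http://127.0.0.1:10248/healthz && echo kubelet healthy
    sudo journalctl -u kubelet --no-pager -n 50    # recent kubelet logs if the probe fails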
	
	
	==> CRI-O <==
	Nov 22 00:59:44 default-k8s-diff-port-882305 crio[652]: time="2025-11-22T00:59:44.813956301Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=abd052a6-1b79-456e-9da3-7f85865892ed name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:59:44 default-k8s-diff-port-882305 crio[652]: time="2025-11-22T00:59:44.816033728Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=c2b86634-46c2-4056-81e4-c208697c4361 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:59:44 default-k8s-diff-port-882305 crio[652]: time="2025-11-22T00:59:44.817182333Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qhkdl/dashboard-metrics-scraper" id=b672711d-90d1-4384-aaac-57629b0289ea name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:59:44 default-k8s-diff-port-882305 crio[652]: time="2025-11-22T00:59:44.817286946Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:59:44 default-k8s-diff-port-882305 crio[652]: time="2025-11-22T00:59:44.829228614Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:59:44 default-k8s-diff-port-882305 crio[652]: time="2025-11-22T00:59:44.831043401Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:59:44 default-k8s-diff-port-882305 crio[652]: time="2025-11-22T00:59:44.853560906Z" level=info msg="Created container e6c681247aa417267830503d9c58396605d3706c59d9c0cf213900ef1533158d: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qhkdl/dashboard-metrics-scraper" id=b672711d-90d1-4384-aaac-57629b0289ea name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:59:44 default-k8s-diff-port-882305 crio[652]: time="2025-11-22T00:59:44.861295891Z" level=info msg="Starting container: e6c681247aa417267830503d9c58396605d3706c59d9c0cf213900ef1533158d" id=4d6b603b-4848-49e8-ad76-1e99357dfdc9 name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:59:44 default-k8s-diff-port-882305 crio[652]: time="2025-11-22T00:59:44.863378913Z" level=info msg="Started container" PID=1638 containerID=e6c681247aa417267830503d9c58396605d3706c59d9c0cf213900ef1533158d description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qhkdl/dashboard-metrics-scraper id=4d6b603b-4848-49e8-ad76-1e99357dfdc9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1a453755c2263d3266645840281d1cf1f6ea63efc84761c5e0db4c52c760cb39
	Nov 22 00:59:44 default-k8s-diff-port-882305 conmon[1636]: conmon e6c681247aa417267830 <ninfo>: container 1638 exited with status 1
	Nov 22 00:59:45 default-k8s-diff-port-882305 crio[652]: time="2025-11-22T00:59:45.272909836Z" level=info msg="Removing container: 7fef82247ff2f4e4a77f9f05faa9569d4394bd65ead3af491328f162ebef040d" id=6d7b4fb7-efee-4934-97a5-84b4f984aba6 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 22 00:59:45 default-k8s-diff-port-882305 crio[652]: time="2025-11-22T00:59:45.284854679Z" level=info msg="Error loading conmon cgroup of container 7fef82247ff2f4e4a77f9f05faa9569d4394bd65ead3af491328f162ebef040d: cgroup deleted" id=6d7b4fb7-efee-4934-97a5-84b4f984aba6 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 22 00:59:45 default-k8s-diff-port-882305 crio[652]: time="2025-11-22T00:59:45.301936815Z" level=info msg="Removed container 7fef82247ff2f4e4a77f9f05faa9569d4394bd65ead3af491328f162ebef040d: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qhkdl/dashboard-metrics-scraper" id=6d7b4fb7-efee-4934-97a5-84b4f984aba6 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 22 00:59:51 default-k8s-diff-port-882305 crio[652]: time="2025-11-22T00:59:51.412178499Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:59:51 default-k8s-diff-port-882305 crio[652]: time="2025-11-22T00:59:51.41920841Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:59:51 default-k8s-diff-port-882305 crio[652]: time="2025-11-22T00:59:51.419368505Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 22 00:59:51 default-k8s-diff-port-882305 crio[652]: time="2025-11-22T00:59:51.419442718Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:59:51 default-k8s-diff-port-882305 crio[652]: time="2025-11-22T00:59:51.426054516Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:59:51 default-k8s-diff-port-882305 crio[652]: time="2025-11-22T00:59:51.426218417Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 22 00:59:51 default-k8s-diff-port-882305 crio[652]: time="2025-11-22T00:59:51.426289652Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:59:51 default-k8s-diff-port-882305 crio[652]: time="2025-11-22T00:59:51.433167182Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:59:51 default-k8s-diff-port-882305 crio[652]: time="2025-11-22T00:59:51.433315838Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 22 00:59:51 default-k8s-diff-port-882305 crio[652]: time="2025-11-22T00:59:51.433385219Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:59:51 default-k8s-diff-port-882305 crio[652]: time="2025-11-22T00:59:51.437305887Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:59:51 default-k8s-diff-port-882305 crio[652]: time="2025-11-22T00:59:51.437446282Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	e6c681247aa41       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           17 seconds ago      Exited              dashboard-metrics-scraper   2                   1a453755c2263       dashboard-metrics-scraper-6ffb444bf9-qhkdl             kubernetes-dashboard
	6797e48f56252       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           21 seconds ago      Running             storage-provisioner         2                   882a29a7f6a3e       storage-provisioner                                    kube-system
	93cb84e4ab699       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   41 seconds ago      Running             kubernetes-dashboard        0                   443e84d4af101       kubernetes-dashboard-855c9754f9-sx5ls                  kubernetes-dashboard
	77f6a3c0f1d2e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           51 seconds ago      Running             coredns                     1                   1b306f7653d80       coredns-66bc5c9577-448gn                               kube-system
	4358d6a53beb0       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           51 seconds ago      Running             busybox                     1                   46d69b833ecb2       busybox                                                default
	99efae2906741       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           51 seconds ago      Running             kindnet-cni                 1                   af7ee23936519       kindnet-kcwqj                                          kube-system
	24c64924a669e       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           51 seconds ago      Running             kube-proxy                  1                   287f524f55754       kube-proxy-59l6x                                       kube-system
	e34c46d28bc8c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           51 seconds ago      Exited              storage-provisioner         1                   882a29a7f6a3e       storage-provisioner                                    kube-system
	ef5cf3bc0e8a1       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           59 seconds ago      Running             kube-controller-manager     1                   a64cd93c0583f       kube-controller-manager-default-k8s-diff-port-882305   kube-system
	d1d854f1c70c8       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           59 seconds ago      Running             kube-scheduler              1                   8d2f2277e2734       kube-scheduler-default-k8s-diff-port-882305            kube-system
	c0ae038240897       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           59 seconds ago      Running             etcd                        1                   1478edc689a14       etcd-default-k8s-diff-port-882305                      kube-system
	1ce380445cfc1       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           59 seconds ago      Running             kube-apiserver              1                   aa4e4c0b74e90       kube-apiserver-default-k8s-diff-port-882305            kube-system
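	The container-status table above appears to be crictl output gathered on the node; with /etc/crictl.yaml pointing at the CRI-O socket, the same information (and the reason the dashboard-metrics-scraper container exited with status 1) can be pulled with:

    sudo crictl ps -a
    sudo crictl logs e6c681247aa417267830503d9c58396605d3706c59d9c0cf213900ef1533158d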
	
	
	==> coredns [77f6a3c0f1d2e079997f3dddd18e52dfa729d725f0cb10e1940295c459f10d6b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59403 - 7797 "HINFO IN 4530320780478815200.5693597820438234680. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.007762063s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
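	The coredns log above shows the kubernetes plugin timing out against the service IP 10.96.0.1:443 while the control plane restarts. From a host with a kubeconfig for this cluster, the same logs and the apiserver's readiness can be checked with (pod name taken from the container-status table):

    kubectl -n kube-system logs coredns-66bc5c9577-448gn
    kubectl get --raw /readyz    # readiness of the apiserver behind 10.96.0.1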
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-882305
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-882305
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=default-k8s-diff-port-882305
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T00_58_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:58:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-882305
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:59:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:59:49 +0000   Sat, 22 Nov 2025 00:57:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:59:49 +0000   Sat, 22 Nov 2025 00:57:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:59:49 +0000   Sat, 22 Nov 2025 00:57:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 22 Nov 2025 00:59:49 +0000   Sat, 22 Nov 2025 00:58:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-882305
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c92eea4d3c03c0156e01fcf8691e3907
	  System UUID:                3e7302ec-f0a5-4d0d-8a5f-75986888bef8
	  Boot ID:                    72ac7385-472f-47d1-a23e-bd80468d6e09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-448gn                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     111s
	  kube-system                 etcd-default-k8s-diff-port-882305                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         115s
	  kube-system                 kindnet-kcwqj                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-default-k8s-diff-port-882305             250m (12%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-882305    200m (10%)    0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-59l6x                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-default-k8s-diff-port-882305             100m (5%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-qhkdl              0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-sx5ls                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 108s                 kube-proxy       
	  Normal   Starting                 51s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  2m3s (x8 over 2m3s)  kubelet          Node default-k8s-diff-port-882305 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m3s (x8 over 2m3s)  kubelet          Node default-k8s-diff-port-882305 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m3s (x8 over 2m3s)  kubelet          Node default-k8s-diff-port-882305 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    116s                 kubelet          Node default-k8s-diff-port-882305 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 116s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  116s                 kubelet          Node default-k8s-diff-port-882305 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     116s                 kubelet          Node default-k8s-diff-port-882305 status is now: NodeHasSufficientPID
	  Normal   Starting                 116s                 kubelet          Starting kubelet.
	  Normal   RegisteredNode           112s                 node-controller  Node default-k8s-diff-port-882305 event: Registered Node default-k8s-diff-port-882305 in Controller
	  Normal   NodeReady                97s                  kubelet          Node default-k8s-diff-port-882305 status is now: NodeReady
	  Normal   Starting                 60s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 60s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  60s (x8 over 60s)    kubelet          Node default-k8s-diff-port-882305 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s (x8 over 60s)    kubelet          Node default-k8s-diff-port-882305 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s (x8 over 60s)    kubelet          Node default-k8s-diff-port-882305 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           50s                  node-controller  Node default-k8s-diff-port-882305 event: Registered Node default-k8s-diff-port-882305 in Controller
	
	
	==> dmesg <==
	[ +56.322609] overlayfs: idmapped layers are currently not supported
	[Nov22 00:38] overlayfs: idmapped layers are currently not supported
	[Nov22 00:39] overlayfs: idmapped layers are currently not supported
	[ +23.174928] overlayfs: idmapped layers are currently not supported
	[Nov22 00:41] overlayfs: idmapped layers are currently not supported
	[Nov22 00:42] overlayfs: idmapped layers are currently not supported
	[Nov22 00:44] overlayfs: idmapped layers are currently not supported
	[Nov22 00:45] overlayfs: idmapped layers are currently not supported
	[Nov22 00:46] overlayfs: idmapped layers are currently not supported
	[Nov22 00:48] overlayfs: idmapped layers are currently not supported
	[Nov22 00:50] overlayfs: idmapped layers are currently not supported
	[Nov22 00:51] overlayfs: idmapped layers are currently not supported
	[ +11.900293] overlayfs: idmapped layers are currently not supported
	[ +28.922055] overlayfs: idmapped layers are currently not supported
	[Nov22 00:52] overlayfs: idmapped layers are currently not supported
	[Nov22 00:53] overlayfs: idmapped layers are currently not supported
	[Nov22 00:54] overlayfs: idmapped layers are currently not supported
	[Nov22 00:55] overlayfs: idmapped layers are currently not supported
	[Nov22 00:56] overlayfs: idmapped layers are currently not supported
	[Nov22 00:57] overlayfs: idmapped layers are currently not supported
	[Nov22 00:58] overlayfs: idmapped layers are currently not supported
	[ +43.407301] overlayfs: idmapped layers are currently not supported
	[Nov22 00:59] overlayfs: idmapped layers are currently not supported
	[  +8.585740] overlayfs: idmapped layers are currently not supported
	[Nov22 01:00] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [c0ae03824089747781ca3fa95c137501b3b35608e772c7bf534789a146554e3c] <==
	{"level":"warn","ts":"2025-11-22T00:59:07.029564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:07.054272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:07.087750Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:07.100268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:07.109064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:07.148563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:07.164513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:07.189834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:07.208016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:07.249473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:07.262290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:07.286270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:07.298624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:07.335948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:07.349161Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:07.364503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:07.383696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:07.407283Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:07.423463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:07.448141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:07.467726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:07.513321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:07.542417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:07.560597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:07.657267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33010","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 01:00:03 up  5:41,  0 user,  load average: 5.18, 4.28, 3.19
	Linux default-k8s-diff-port-882305 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [99efae290674153fda78e1cc8d351668db7f8f1a6a89e147416cef08d7b43096] <==
	I1122 00:59:11.213944       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1122 00:59:11.214287       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1122 00:59:11.220187       1 main.go:148] setting mtu 1500 for CNI 
	I1122 00:59:11.220367       1 main.go:178] kindnetd IP family: "ipv4"
	I1122 00:59:11.220415       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-22T00:59:11Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1122 00:59:11.411480       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1122 00:59:11.411573       1 controller.go:381] "Waiting for informer caches to sync"
	I1122 00:59:11.411605       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1122 00:59:11.412374       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1122 00:59:41.412254       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1122 00:59:41.412374       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1122 00:59:41.412456       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1122 00:59:41.412535       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1122 00:59:43.012806       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1122 00:59:43.012842       1 metrics.go:72] Registering metrics
	I1122 00:59:43.012904       1 controller.go:711] "Syncing nftables rules"
	I1122 00:59:51.411777       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:59:51.411907       1 main.go:301] handling current node
	I1122 01:00:01.419472       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 01:00:01.419532       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1ce380445cfc1fe8d2cbb405092ab03fd65cb6c2cf8bac3317898266e679c5d3] <==
	I1122 00:59:09.166145       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1122 00:59:09.166273       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1122 00:59:09.166315       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1122 00:59:09.166547       1 aggregator.go:171] initial CRD sync complete...
	I1122 00:59:09.166556       1 autoregister_controller.go:144] Starting autoregister controller
	I1122 00:59:09.166561       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1122 00:59:09.166566       1 cache.go:39] Caches are synced for autoregister controller
	I1122 00:59:09.166727       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1122 00:59:09.174446       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1122 00:59:09.174473       1 policy_source.go:240] refreshing policies
	I1122 00:59:09.190300       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1122 00:59:09.197252       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1122 00:59:09.219609       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1122 00:59:09.428166       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1122 00:59:10.167478       1 controller.go:667] quota admission added evaluator for: namespaces
	I1122 00:59:10.241611       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1122 00:59:10.346097       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1122 00:59:10.418866       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1122 00:59:10.490698       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1122 00:59:10.626123       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.206.183"}
	I1122 00:59:10.709293       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.232.63"}
	I1122 00:59:12.388266       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1122 00:59:12.437666       1 controller.go:667] quota admission added evaluator for: endpoints
	I1122 00:59:12.480282       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1122 00:59:12.508830       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [ef5cf3bc0e8a1e84b865765165a5244f97715b14ad4afe6bdecb47483cb802ba] <==
	I1122 00:59:12.282504       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1122 00:59:12.282612       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1122 00:59:12.282671       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1122 00:59:12.286093       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:59:12.302419       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1122 00:59:12.306003       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1122 00:59:12.318283       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1122 00:59:12.318426       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1122 00:59:12.335849       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1122 00:59:12.335959       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1122 00:59:12.336010       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1122 00:59:12.336077       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1122 00:59:12.336117       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1122 00:59:12.336177       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1122 00:59:12.336500       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:59:12.338310       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1122 00:59:12.343745       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1122 00:59:12.349541       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-882305"
	I1122 00:59:12.350546       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1122 00:59:12.343892       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1122 00:59:12.344600       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1122 00:59:12.428981       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:59:12.429270       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1122 00:59:12.429357       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1122 00:59:12.429461       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [24c64924a669eecc6f41d3f6f2a0935ebe7520e41d8214678fc5533fb88d7dd3] <==
	I1122 00:59:11.471029       1 server_linux.go:53] "Using iptables proxy"
	I1122 00:59:11.632210       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1122 00:59:11.755208       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1122 00:59:11.759137       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1122 00:59:11.766042       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1122 00:59:11.879624       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1122 00:59:11.879675       1 server_linux.go:132] "Using iptables Proxier"
	I1122 00:59:11.889079       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1122 00:59:11.889344       1 server.go:527] "Version info" version="v1.34.1"
	I1122 00:59:11.889359       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:59:11.891013       1 config.go:200] "Starting service config controller"
	I1122 00:59:11.891037       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1122 00:59:11.891061       1 config.go:106] "Starting endpoint slice config controller"
	I1122 00:59:11.891066       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1122 00:59:11.891081       1 config.go:403] "Starting serviceCIDR config controller"
	I1122 00:59:11.891085       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1122 00:59:11.895682       1 config.go:309] "Starting node config controller"
	I1122 00:59:11.895701       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1122 00:59:11.895709       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1122 00:59:11.993456       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1122 00:59:11.993502       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1122 00:59:11.993541       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [d1d854f1c70c8c8f58aacea7d3bc3bea0c433b6787c467ffaf9f43d30127f3aa] <==
	I1122 00:59:08.780069       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:59:08.830655       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:59:08.830692       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:59:08.832564       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1122 00:59:08.832685       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1122 00:59:08.867215       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1122 00:59:08.867293       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1122 00:59:08.867588       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1122 00:59:08.867655       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1122 00:59:08.867694       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1122 00:59:08.867730       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1122 00:59:08.867767       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1122 00:59:08.867807       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1122 00:59:08.867846       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1122 00:59:08.867882       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1122 00:59:08.867916       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1122 00:59:08.867960       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1122 00:59:08.868588       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1122 00:59:08.890275       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1122 00:59:08.890338       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1122 00:59:08.890378       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1122 00:59:08.890417       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1122 00:59:08.890454       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1122 00:59:08.890503       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1122 00:59:10.033861       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 22 00:59:12 default-k8s-diff-port-882305 kubelet[782]: I1122 00:59:12.768979     782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djl87\" (UniqueName: \"kubernetes.io/projected/a27d7302-b089-4adf-a86b-4d6b9bfdb28c-kube-api-access-djl87\") pod \"kubernetes-dashboard-855c9754f9-sx5ls\" (UID: \"a27d7302-b089-4adf-a86b-4d6b9bfdb28c\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-sx5ls"
	Nov 22 00:59:12 default-k8s-diff-port-882305 kubelet[782]: I1122 00:59:12.769492     782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a27d7302-b089-4adf-a86b-4d6b9bfdb28c-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-sx5ls\" (UID: \"a27d7302-b089-4adf-a86b-4d6b9bfdb28c\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-sx5ls"
	Nov 22 00:59:12 default-k8s-diff-port-882305 kubelet[782]: I1122 00:59:12.869986     782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/de50e910-8206-46b1-918d-353f76a54323-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-qhkdl\" (UID: \"de50e910-8206-46b1-918d-353f76a54323\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qhkdl"
	Nov 22 00:59:12 default-k8s-diff-port-882305 kubelet[782]: I1122 00:59:12.870193     782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sc8c\" (UniqueName: \"kubernetes.io/projected/de50e910-8206-46b1-918d-353f76a54323-kube-api-access-9sc8c\") pod \"dashboard-metrics-scraper-6ffb444bf9-qhkdl\" (UID: \"de50e910-8206-46b1-918d-353f76a54323\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qhkdl"
	Nov 22 00:59:12 default-k8s-diff-port-882305 kubelet[782]: W1122 00:59:12.995860     782 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/3f972239d661c1a7b13b144967a8f04cbcdb72c736bec80a78bae8fab1d357e1/crio-443e84d4af101581c7521ebf3de82847ebb19b32ca053e0d41057f86372641f2 WatchSource:0}: Error finding container 443e84d4af101581c7521ebf3de82847ebb19b32ca053e0d41057f86372641f2: Status 404 returned error can't find the container with id 443e84d4af101581c7521ebf3de82847ebb19b32ca053e0d41057f86372641f2
	Nov 22 00:59:13 default-k8s-diff-port-882305 kubelet[782]: W1122 00:59:13.157355     782 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/3f972239d661c1a7b13b144967a8f04cbcdb72c736bec80a78bae8fab1d357e1/crio-1a453755c2263d3266645840281d1cf1f6ea63efc84761c5e0db4c52c760cb39 WatchSource:0}: Error finding container 1a453755c2263d3266645840281d1cf1f6ea63efc84761c5e0db4c52c760cb39: Status 404 returned error can't find the container with id 1a453755c2263d3266645840281d1cf1f6ea63efc84761c5e0db4c52c760cb39
	Nov 22 00:59:28 default-k8s-diff-port-882305 kubelet[782]: I1122 00:59:28.212297     782 scope.go:117] "RemoveContainer" containerID="2d7935ddb77f1a24c452a280df0ed370bb0576b6eecca2c0445ba137dcac57cc"
	Nov 22 00:59:28 default-k8s-diff-port-882305 kubelet[782]: I1122 00:59:28.245608     782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-sx5ls" podStartSLOduration=8.221576667 podStartE2EDuration="16.24559078s" podCreationTimestamp="2025-11-22 00:59:12 +0000 UTC" firstStartedPulling="2025-11-22 00:59:13.008829887 +0000 UTC m=+10.496820335" lastFinishedPulling="2025-11-22 00:59:21.032844 +0000 UTC m=+18.520834448" observedRunningTime="2025-11-22 00:59:21.217308446 +0000 UTC m=+18.705298902" watchObservedRunningTime="2025-11-22 00:59:28.24559078 +0000 UTC m=+25.733581236"
	Nov 22 00:59:29 default-k8s-diff-port-882305 kubelet[782]: I1122 00:59:29.216600     782 scope.go:117] "RemoveContainer" containerID="2d7935ddb77f1a24c452a280df0ed370bb0576b6eecca2c0445ba137dcac57cc"
	Nov 22 00:59:29 default-k8s-diff-port-882305 kubelet[782]: I1122 00:59:29.216916     782 scope.go:117] "RemoveContainer" containerID="7fef82247ff2f4e4a77f9f05faa9569d4394bd65ead3af491328f162ebef040d"
	Nov 22 00:59:29 default-k8s-diff-port-882305 kubelet[782]: E1122 00:59:29.217077     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qhkdl_kubernetes-dashboard(de50e910-8206-46b1-918d-353f76a54323)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qhkdl" podUID="de50e910-8206-46b1-918d-353f76a54323"
	Nov 22 00:59:30 default-k8s-diff-port-882305 kubelet[782]: I1122 00:59:30.222352     782 scope.go:117] "RemoveContainer" containerID="7fef82247ff2f4e4a77f9f05faa9569d4394bd65ead3af491328f162ebef040d"
	Nov 22 00:59:30 default-k8s-diff-port-882305 kubelet[782]: E1122 00:59:30.222994     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qhkdl_kubernetes-dashboard(de50e910-8206-46b1-918d-353f76a54323)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qhkdl" podUID="de50e910-8206-46b1-918d-353f76a54323"
	Nov 22 00:59:33 default-k8s-diff-port-882305 kubelet[782]: I1122 00:59:33.033921     782 scope.go:117] "RemoveContainer" containerID="7fef82247ff2f4e4a77f9f05faa9569d4394bd65ead3af491328f162ebef040d"
	Nov 22 00:59:33 default-k8s-diff-port-882305 kubelet[782]: E1122 00:59:33.034627     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qhkdl_kubernetes-dashboard(de50e910-8206-46b1-918d-353f76a54323)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qhkdl" podUID="de50e910-8206-46b1-918d-353f76a54323"
	Nov 22 00:59:41 default-k8s-diff-port-882305 kubelet[782]: I1122 00:59:41.252577     782 scope.go:117] "RemoveContainer" containerID="e34c46d28bc8c466e4b69397894de9dbaf562f334db936138792ea857f7984cf"
	Nov 22 00:59:44 default-k8s-diff-port-882305 kubelet[782]: I1122 00:59:44.812964     782 scope.go:117] "RemoveContainer" containerID="7fef82247ff2f4e4a77f9f05faa9569d4394bd65ead3af491328f162ebef040d"
	Nov 22 00:59:45 default-k8s-diff-port-882305 kubelet[782]: I1122 00:59:45.267301     782 scope.go:117] "RemoveContainer" containerID="7fef82247ff2f4e4a77f9f05faa9569d4394bd65ead3af491328f162ebef040d"
	Nov 22 00:59:45 default-k8s-diff-port-882305 kubelet[782]: I1122 00:59:45.267635     782 scope.go:117] "RemoveContainer" containerID="e6c681247aa417267830503d9c58396605d3706c59d9c0cf213900ef1533158d"
	Nov 22 00:59:45 default-k8s-diff-port-882305 kubelet[782]: E1122 00:59:45.267942     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qhkdl_kubernetes-dashboard(de50e910-8206-46b1-918d-353f76a54323)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qhkdl" podUID="de50e910-8206-46b1-918d-353f76a54323"
	Nov 22 00:59:53 default-k8s-diff-port-882305 kubelet[782]: I1122 00:59:53.034045     782 scope.go:117] "RemoveContainer" containerID="e6c681247aa417267830503d9c58396605d3706c59d9c0cf213900ef1533158d"
	Nov 22 00:59:53 default-k8s-diff-port-882305 kubelet[782]: E1122 00:59:53.034266     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qhkdl_kubernetes-dashboard(de50e910-8206-46b1-918d-353f76a54323)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qhkdl" podUID="de50e910-8206-46b1-918d-353f76a54323"
	Nov 22 00:59:58 default-k8s-diff-port-882305 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 22 00:59:58 default-k8s-diff-port-882305 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 22 00:59:58 default-k8s-diff-port-882305 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [93cb84e4ab699eb2196717799321e4330d7fd89c7b5847a1367298b8dc5f69b4] <==
	2025/11/22 00:59:21 Using namespace: kubernetes-dashboard
	2025/11/22 00:59:21 Using in-cluster config to connect to apiserver
	2025/11/22 00:59:21 Using secret token for csrf signing
	2025/11/22 00:59:21 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/22 00:59:21 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/22 00:59:21 Successful initial request to the apiserver, version: v1.34.1
	2025/11/22 00:59:21 Generating JWE encryption key
	2025/11/22 00:59:21 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/22 00:59:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/22 00:59:22 Initializing JWE encryption key from synchronized object
	2025/11/22 00:59:22 Creating in-cluster Sidecar client
	2025/11/22 00:59:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/22 00:59:22 Serving insecurely on HTTP port: 9090
	2025/11/22 00:59:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/22 00:59:21 Starting overwatch
	
	
	==> storage-provisioner [6797e48f56252ab176007011001840ada8a9976acd404dda959a8334d3c46cdb] <==
	I1122 00:59:41.307400       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1122 00:59:41.320439       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1122 00:59:41.320554       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1122 00:59:41.323028       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:59:44.779172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:59:49.040448       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:59:52.639304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:59:55.692511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:59:58.715844       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:59:58.722690       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1122 00:59:58.722952       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1122 00:59:58.725187       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-882305_f3fb3511-5bfa-4e5e-b5dc-34953193612b!
	I1122 00:59:58.732243       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5cef023a-e193-4fc5-8350-b0d9fd8c5815", APIVersion:"v1", ResourceVersion:"688", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-882305_f3fb3511-5bfa-4e5e-b5dc-34953193612b became leader
	W1122 00:59:58.733603       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:59:58.759176       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1122 00:59:58.827473       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-882305_f3fb3511-5bfa-4e5e-b5dc-34953193612b!
	W1122 01:00:00.774904       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 01:00:00.796597       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 01:00:02.799633       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 01:00:02.814046       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [e34c46d28bc8c466e4b69397894de9dbaf562f334db936138792ea857f7984cf] <==
	I1122 00:59:10.917080       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1122 00:59:40.919323       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-882305 -n default-k8s-diff-port-882305
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-882305 -n default-k8s-diff-port-882305: exit status 2 (597.942024ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-882305 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-882305
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-882305:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3f972239d661c1a7b13b144967a8f04cbcdb72c736bec80a78bae8fab1d357e1",
	        "Created": "2025-11-22T00:57:41.715477223Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 721430,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-22T00:58:54.762328967Z",
	            "FinishedAt": "2025-11-22T00:58:53.878561188Z"
	        },
	        "Image": "sha256:97061622515a85470dad13cb046ad5fc5021fcd2b05aa921a620abb52951cd5d",
	        "ResolvConfPath": "/var/lib/docker/containers/3f972239d661c1a7b13b144967a8f04cbcdb72c736bec80a78bae8fab1d357e1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3f972239d661c1a7b13b144967a8f04cbcdb72c736bec80a78bae8fab1d357e1/hostname",
	        "HostsPath": "/var/lib/docker/containers/3f972239d661c1a7b13b144967a8f04cbcdb72c736bec80a78bae8fab1d357e1/hosts",
	        "LogPath": "/var/lib/docker/containers/3f972239d661c1a7b13b144967a8f04cbcdb72c736bec80a78bae8fab1d357e1/3f972239d661c1a7b13b144967a8f04cbcdb72c736bec80a78bae8fab1d357e1-json.log",
	        "Name": "/default-k8s-diff-port-882305",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-882305:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-882305",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3f972239d661c1a7b13b144967a8f04cbcdb72c736bec80a78bae8fab1d357e1",
	                "LowerDir": "/var/lib/docker/overlay2/2a6ef4a6700f6fb0e1d43ffbcadcf526e3d5c7ff5a78ad3ae005fd03563625b2-init/diff:/var/lib/docker/overlay2/7e8788c6de692bc1c3758a2bb2c4b8da0fbba26855f855c0f3b655bfbdd92f8e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2a6ef4a6700f6fb0e1d43ffbcadcf526e3d5c7ff5a78ad3ae005fd03563625b2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2a6ef4a6700f6fb0e1d43ffbcadcf526e3d5c7ff5a78ad3ae005fd03563625b2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2a6ef4a6700f6fb0e1d43ffbcadcf526e3d5c7ff5a78ad3ae005fd03563625b2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-882305",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-882305/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-882305",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-882305",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-882305",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "22f138de61f9278f25b67745fc7a3d678329237fa47cfcabfb4fe36d425d3a5c",
	            "SandboxKey": "/var/run/docker/netns/22f138de61f9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33812"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33813"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33816"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33814"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33815"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-882305": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:c7:8f:97:68:d5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b345e3fe787228de3ab90525c1947dc1357720a8a249cb6a46c68e40ecbfe59b",
	                    "EndpointID": "e8d8e49b23f29c0542a40582af41dc213ef0d9b4e28836a0e84ef845cfedf6d5",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-882305",
	                        "3f972239d661"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-882305 -n default-k8s-diff-port-882305
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-882305 -n default-k8s-diff-port-882305: exit status 2 (441.600847ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-882305 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-882305 logs -n 25: (1.812163812s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ -p no-preload-165130 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-165130            │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │                     │
	│ delete  │ -p no-preload-165130                                                                                                                                                                                                                          │ no-preload-165130            │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │ 22 Nov 25 00:57 UTC │
	│ delete  │ -p no-preload-165130                                                                                                                                                                                                                          │ no-preload-165130            │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │ 22 Nov 25 00:57 UTC │
	│ delete  │ -p disable-driver-mounts-046489                                                                                                                                                                                                               │ disable-driver-mounts-046489 │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │ 22 Nov 25 00:57 UTC │
	│ start   │ -p default-k8s-diff-port-882305 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-882305 │ jenkins │ v1.37.0 │ 22 Nov 25 00:57 UTC │ 22 Nov 25 00:58 UTC │
	│ image   │ embed-certs-879000 image list --format=json                                                                                                                                                                                                   │ embed-certs-879000           │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │ 22 Nov 25 00:58 UTC │
	│ pause   │ -p embed-certs-879000 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-879000           │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │                     │
	│ delete  │ -p embed-certs-879000                                                                                                                                                                                                                         │ embed-certs-879000           │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │ 22 Nov 25 00:58 UTC │
	│ delete  │ -p embed-certs-879000                                                                                                                                                                                                                         │ embed-certs-879000           │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │ 22 Nov 25 00:58 UTC │
	│ start   │ -p newest-cni-683181 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-683181            │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │ 22 Nov 25 00:58 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-882305 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-882305 │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-882305 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-882305 │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │ 22 Nov 25 00:58 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-882305 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-882305 │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │ 22 Nov 25 00:58 UTC │
	│ start   │ -p default-k8s-diff-port-882305 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-882305 │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │ 22 Nov 25 00:59 UTC │
	│ addons  │ enable metrics-server -p newest-cni-683181 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-683181            │ jenkins │ v1.37.0 │ 22 Nov 25 00:58 UTC │                     │
	│ stop    │ -p newest-cni-683181 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-683181            │ jenkins │ v1.37.0 │ 22 Nov 25 00:59 UTC │ 22 Nov 25 00:59 UTC │
	│ addons  │ enable dashboard -p newest-cni-683181 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-683181            │ jenkins │ v1.37.0 │ 22 Nov 25 00:59 UTC │ 22 Nov 25 00:59 UTC │
	│ start   │ -p newest-cni-683181 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-683181            │ jenkins │ v1.37.0 │ 22 Nov 25 00:59 UTC │ 22 Nov 25 00:59 UTC │
	│ image   │ newest-cni-683181 image list --format=json                                                                                                                                                                                                    │ newest-cni-683181            │ jenkins │ v1.37.0 │ 22 Nov 25 00:59 UTC │ 22 Nov 25 00:59 UTC │
	│ pause   │ -p newest-cni-683181 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-683181            │ jenkins │ v1.37.0 │ 22 Nov 25 00:59 UTC │                     │
	│ delete  │ -p newest-cni-683181                                                                                                                                                                                                                          │ newest-cni-683181            │ jenkins │ v1.37.0 │ 22 Nov 25 00:59 UTC │ 22 Nov 25 00:59 UTC │
	│ delete  │ -p newest-cni-683181                                                                                                                                                                                                                          │ newest-cni-683181            │ jenkins │ v1.37.0 │ 22 Nov 25 00:59 UTC │ 22 Nov 25 00:59 UTC │
	│ start   │ -p auto-163229 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-163229                  │ jenkins │ v1.37.0 │ 22 Nov 25 00:59 UTC │                     │
	│ image   │ default-k8s-diff-port-882305 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-882305 │ jenkins │ v1.37.0 │ 22 Nov 25 00:59 UTC │ 22 Nov 25 00:59 UTC │
	│ pause   │ -p default-k8s-diff-port-882305 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-882305 │ jenkins │ v1.37.0 │ 22 Nov 25 00:59 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 00:59:34
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
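
The header above documents the klog/glog prefix every entry below uses: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg. A minimal sketch of splitting such a line into its fields, assuming that format holds for each entry (the regexp and field names are illustrative, not part of minikube):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // klogLine matches the prefix documented above:
    // [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
    var klogLine = regexp.MustCompile(
    	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

    func main() {
    	sample := `I1122 00:59:34.250089  727352 out.go:360] Setting OutFile to fd 1 ...`
    	m := klogLine.FindStringSubmatch(sample)
    	if m == nil {
    		fmt.Println("not a klog-formatted line")
    		return
    	}
    	fmt.Printf("severity=%s date=%s time=%s tid=%s file=%s line=%s msg=%q\n",
    		m[1], m[2], m[3], m[4], m[5], m[6], m[7])
    }
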
	I1122 00:59:34.250089  727352 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:59:34.250301  727352 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:59:34.250346  727352 out.go:374] Setting ErrFile to fd 2...
	I1122 00:59:34.250376  727352 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:59:34.250891  727352 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1122 00:59:34.252058  727352 out.go:368] Setting JSON to false
	I1122 00:59:34.253384  727352 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":20491,"bootTime":1763752684,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1122 00:59:34.253457  727352 start.go:143] virtualization:  
	I1122 00:59:34.257147  727352 out.go:179] * [auto-163229] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1122 00:59:34.261123  727352 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:59:34.261191  727352 notify.go:221] Checking for updates...
	I1122 00:59:34.267559  727352 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:59:34.270502  727352 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1122 00:59:34.273474  727352 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-513600/.minikube
	I1122 00:59:34.276295  727352 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1122 00:59:34.279285  727352 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:59:34.282658  727352 config.go:182] Loaded profile config "default-k8s-diff-port-882305": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:59:34.282783  727352 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:59:34.311096  727352 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1122 00:59:34.311221  727352 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:59:34.369669  727352 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-22 00:59:34.35981101 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:59:34.369777  727352 docker.go:319] overlay module found
	I1122 00:59:34.373047  727352 out.go:179] * Using the docker driver based on user configuration
	W1122 00:59:31.664259  721299 pod_ready.go:104] pod "coredns-66bc5c9577-448gn" is not "Ready", error: <nil>
	W1122 00:59:33.668023  721299 pod_ready.go:104] pod "coredns-66bc5c9577-448gn" is not "Ready", error: <nil>
	I1122 00:59:34.376039  727352 start.go:309] selected driver: docker
	I1122 00:59:34.376058  727352 start.go:930] validating driver "docker" against <nil>
	I1122 00:59:34.376071  727352 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:59:34.376780  727352 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:59:34.436164  727352 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-22 00:59:34.426287157 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:59:34.436334  727352 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1122 00:59:34.436566  727352 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:59:34.439516  727352 out.go:179] * Using Docker driver with root privileges
	I1122 00:59:34.442486  727352 cni.go:84] Creating CNI manager for ""
	I1122 00:59:34.442556  727352 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:59:34.442569  727352 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1122 00:59:34.442655  727352 start.go:353] cluster config:
	{Name:auto-163229 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-163229 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri
o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Au
toPauseInterval:1m0s}
	I1122 00:59:34.445871  727352 out.go:179] * Starting "auto-163229" primary control-plane node in "auto-163229" cluster
	I1122 00:59:34.448580  727352 cache.go:134] Beginning downloading kic base image for docker with crio
	I1122 00:59:34.451577  727352 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:59:34.454488  727352 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:59:34.454532  727352 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1122 00:59:34.454541  727352 cache.go:65] Caching tarball of preloaded images
	I1122 00:59:34.454640  727352 preload.go:238] Found /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1122 00:59:34.454650  727352 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1122 00:59:34.454752  727352 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/config.json ...
	I1122 00:59:34.454793  727352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/config.json: {Name:mk35cd32535ce46e47638e4f6a6d136cd76a9b93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:59:34.454939  727352 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:59:34.476300  727352 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:59:34.476322  727352 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:59:34.476336  727352 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:59:34.476358  727352 start.go:360] acquireMachinesLock for auto-163229: {Name:mkc3cbe710dfebc8fd711e50a5fb6ebe2f3767b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:59:34.476457  727352 start.go:364] duration metric: took 80.105µs to acquireMachinesLock for "auto-163229"
	I1122 00:59:34.476486  727352 start.go:93] Provisioning new machine with config: &{Name:auto-163229 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-163229 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1122 00:59:34.476555  727352 start.go:125] createHost starting for "" (driver="docker")
	I1122 00:59:34.480027  727352 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1122 00:59:34.480293  727352 start.go:159] libmachine.API.Create for "auto-163229" (driver="docker")
	I1122 00:59:34.480326  727352 client.go:173] LocalClient.Create starting
	I1122 00:59:34.480401  727352 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem
	I1122 00:59:34.480435  727352 main.go:143] libmachine: Decoding PEM data...
	I1122 00:59:34.480457  727352 main.go:143] libmachine: Parsing certificate...
	I1122 00:59:34.480511  727352 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem
	I1122 00:59:34.480533  727352 main.go:143] libmachine: Decoding PEM data...
	I1122 00:59:34.480548  727352 main.go:143] libmachine: Parsing certificate...
	I1122 00:59:34.480894  727352 cli_runner.go:164] Run: docker network inspect auto-163229 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1122 00:59:34.497281  727352 cli_runner.go:211] docker network inspect auto-163229 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1122 00:59:34.497360  727352 network_create.go:284] running [docker network inspect auto-163229] to gather additional debugging logs...
	I1122 00:59:34.497381  727352 cli_runner.go:164] Run: docker network inspect auto-163229
	W1122 00:59:34.521231  727352 cli_runner.go:211] docker network inspect auto-163229 returned with exit code 1
	I1122 00:59:34.521284  727352 network_create.go:287] error running [docker network inspect auto-163229]: docker network inspect auto-163229: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-163229 not found
	I1122 00:59:34.521299  727352 network_create.go:289] output of [docker network inspect auto-163229]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-163229 not found
	
	** /stderr **
	I1122 00:59:34.521397  727352 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:59:34.538964  727352 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b16c782e3da8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ba:82:00:9d:45:d0} reservation:<nil>}
	I1122 00:59:34.539319  727352 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-13c9c00b5de5 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:7a:4e:a4:3d:42:9e} reservation:<nil>}
	I1122 00:59:34.539686  727352 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3c074a6aa87b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9a:1f:77:e5:90:0b} reservation:<nil>}
	I1122 00:59:34.540128  727352 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a06320}
	I1122 00:59:34.540152  727352 network_create.go:124] attempt to create docker network auto-163229 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1122 00:59:34.540207  727352 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-163229 auto-163229
	I1122 00:59:34.605220  727352 network_create.go:108] docker network auto-163229 192.168.76.0/24 created
	I1122 00:59:34.605249  727352 kic.go:121] calculated static IP "192.168.76.2" for the "auto-163229" container
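
The subnet probing above walks candidate private /24s, skips every subnet an existing Docker bridge network already claims, and creates the new network on the first free one (192.168.76.0/24 here). A rough sketch of that selection against `docker network inspect` output, assuming the same candidate list the log shows; the variable names and the hard-coded network name are illustrative, not what network_create.go literally runs:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Candidate /24s in the order the log probes them.
    	candidates := []string{
    		"192.168.49.0/24", "192.168.58.0/24",
    		"192.168.67.0/24", "192.168.76.0/24",
    	}

    	// Collect subnets already claimed by existing docker networks.
    	ids, _ := exec.Command("docker", "network", "ls", "-q").Output()
    	taken := map[string]bool{}
    	for _, id := range strings.Fields(string(ids)) {
    		out, err := exec.Command("docker", "network", "inspect", id,
    			"--format", "{{range .IPAM.Config}}{{.Subnet}} {{end}}").Output()
    		if err != nil {
    			continue
    		}
    		for _, s := range strings.Fields(string(out)) {
    			taken[s] = true
    		}
    	}

    	// The first free candidate becomes the new bridge network, as above.
    	for _, c := range candidates {
    		if taken[c] {
    			fmt.Println("skipping taken subnet", c)
    			continue
    		}
    		fmt.Println("would create bridge network on", c)
    		// exec.Command("docker", "network", "create", "--driver=bridge",
    		// 	"--subnet="+c, "auto-163229").Run()
    		return
    	}
    	fmt.Println("no free candidate subnet")
    }
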
	I1122 00:59:34.605347  727352 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1122 00:59:34.622876  727352 cli_runner.go:164] Run: docker volume create auto-163229 --label name.minikube.sigs.k8s.io=auto-163229 --label created_by.minikube.sigs.k8s.io=true
	I1122 00:59:34.640655  727352 oci.go:103] Successfully created a docker volume auto-163229
	I1122 00:59:34.640756  727352 cli_runner.go:164] Run: docker run --rm --name auto-163229-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-163229 --entrypoint /usr/bin/test -v auto-163229:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -d /var/lib
	I1122 00:59:35.191753  727352 oci.go:107] Successfully prepared a docker volume auto-163229
	I1122 00:59:35.191836  727352 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:59:35.191854  727352 kic.go:194] Starting extracting preloaded images to volume ...
	I1122 00:59:35.191921  727352 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-163229:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir
	W1122 00:59:36.163083  721299 pod_ready.go:104] pod "coredns-66bc5c9577-448gn" is not "Ready", error: <nil>
	W1122 00:59:38.664088  721299 pod_ready.go:104] pod "coredns-66bc5c9577-448gn" is not "Ready", error: <nil>
	I1122 00:59:39.536076  727352 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-163229:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir: (4.344115048s)
	I1122 00:59:39.536115  727352 kic.go:203] duration metric: took 4.344257978s to extract preloaded images to volume ...
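
The extraction step above mounts the preload tarball read-only next to the named volume and untars it inside the kicbase image, so the images land in the volume before the node container ever starts. A condensed sketch of the same pattern with os/exec; the paths, volume, and image tag are taken from the log (image digest omitted for brevity), and the surrounding program is illustrative only:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	tarball := "/home/jenkins/minikube-integration/21934-513600/.minikube/cache/" +
    		"preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4"
    	volume := "auto-163229"
    	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934"

    	start := time.Now()
    	cmd := exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/tar",
    		"-v", tarball+":/preloaded.tar:ro",
    		"-v", volume+":/extractDir",
    		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
    	if out, err := cmd.CombinedOutput(); err != nil {
    		fmt.Printf("extract failed: %v\n%s", err, out)
    		return
    	}
    	// The log records this as a duration metric, ~4.3s on this runner.
    	fmt.Println("extracted preloaded images in", time.Since(start))
    }
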
	W1122 00:59:39.536253  727352 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1122 00:59:39.536370  727352 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1122 00:59:39.586887  727352 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-163229 --name auto-163229 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-163229 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-163229 --network auto-163229 --ip 192.168.76.2 --volume auto-163229:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e
	I1122 00:59:39.886587  727352 cli_runner.go:164] Run: docker container inspect auto-163229 --format={{.State.Running}}
	I1122 00:59:39.908705  727352 cli_runner.go:164] Run: docker container inspect auto-163229 --format={{.State.Status}}
	I1122 00:59:39.938080  727352 cli_runner.go:164] Run: docker exec auto-163229 stat /var/lib/dpkg/alternatives/iptables
	I1122 00:59:39.991103  727352 oci.go:144] the created container "auto-163229" has a running status.
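
Right after `docker run`, the flow above inspects the container state before attempting anything over exec or SSH. A small sketch of that readiness poll using the same `--format={{.State.Running}}` template the log shows; the fixed name and timeout are assumptions for the example:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // waitRunning polls `docker container inspect` until .State.Running reports true.
    func waitRunning(name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("docker", "container", "inspect", name,
    			"--format", "{{.State.Running}}").Output()
    		if err == nil && strings.TrimSpace(string(out)) == "true" {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("container %q not running after %s", name, timeout)
    }

    func main() {
    	if err := waitRunning("auto-163229", 30*time.Second); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("container is running")
    }
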
	I1122 00:59:39.991141  727352 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/auto-163229/id_rsa...
	I1122 00:59:40.315442  727352 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21934-513600/.minikube/machines/auto-163229/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1122 00:59:40.341572  727352 cli_runner.go:164] Run: docker container inspect auto-163229 --format={{.State.Status}}
	I1122 00:59:40.376319  727352 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1122 00:59:40.376340  727352 kic_runner.go:114] Args: [docker exec --privileged auto-163229 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1122 00:59:40.440720  727352 cli_runner.go:164] Run: docker container inspect auto-163229 --format={{.State.Status}}
	I1122 00:59:40.471504  727352 machine.go:94] provisionDockerMachine start ...
	I1122 00:59:40.471593  727352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-163229
	I1122 00:59:40.489316  727352 main.go:143] libmachine: Using SSH client type: native
	I1122 00:59:40.489660  727352 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33822 <nil> <nil>}
	I1122 00:59:40.489674  727352 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:59:40.490347  727352 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33168->127.0.0.1:33822: read: connection reset by peer
	I1122 00:59:43.641618  727352 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-163229
	
	I1122 00:59:43.641644  727352 ubuntu.go:182] provisioning hostname "auto-163229"
	I1122 00:59:43.641714  727352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-163229
	I1122 00:59:43.661338  727352 main.go:143] libmachine: Using SSH client type: native
	I1122 00:59:43.661645  727352 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33822 <nil> <nil>}
	I1122 00:59:43.661655  727352 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-163229 && echo "auto-163229" | sudo tee /etc/hostname
	I1122 00:59:43.815194  727352 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-163229
	
	I1122 00:59:43.815294  727352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-163229
	I1122 00:59:43.841068  727352 main.go:143] libmachine: Using SSH client type: native
	I1122 00:59:43.841400  727352 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33822 <nil> <nil>}
	I1122 00:59:43.841421  727352 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-163229' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-163229/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-163229' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:59:43.982106  727352 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1122 00:59:43.982184  727352 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-513600/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-513600/.minikube}
	I1122 00:59:43.982211  727352 ubuntu.go:190] setting up certificates
	I1122 00:59:43.982221  727352 provision.go:84] configureAuth start
	I1122 00:59:43.982291  727352 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-163229
	I1122 00:59:44.001187  727352 provision.go:143] copyHostCerts
	I1122 00:59:44.001266  727352 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem, removing ...
	I1122 00:59:44.001278  727352 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem
	I1122 00:59:44.001371  727352 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/cert.pem (1123 bytes)
	I1122 00:59:44.001478  727352 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem, removing ...
	I1122 00:59:44.001483  727352 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem
	I1122 00:59:44.001512  727352 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/key.pem (1675 bytes)
	I1122 00:59:44.001567  727352 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem, removing ...
	I1122 00:59:44.001571  727352 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem
	I1122 00:59:44.001601  727352 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-513600/.minikube/ca.pem (1078 bytes)
	I1122 00:59:44.001654  727352 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem org=jenkins.auto-163229 san=[127.0.0.1 192.168.76.2 auto-163229 localhost minikube]
	W1122 00:59:41.163906  721299 pod_ready.go:104] pod "coredns-66bc5c9577-448gn" is not "Ready", error: <nil>
	W1122 00:59:43.665278  721299 pod_ready.go:104] pod "coredns-66bc5c9577-448gn" is not "Ready", error: <nil>
	I1122 00:59:44.667728  721299 pod_ready.go:94] pod "coredns-66bc5c9577-448gn" is "Ready"
	I1122 00:59:44.667759  721299 pod_ready.go:86] duration metric: took 33.510009512s for pod "coredns-66bc5c9577-448gn" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:59:44.670826  721299 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-882305" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:59:44.676764  721299 pod_ready.go:94] pod "etcd-default-k8s-diff-port-882305" is "Ready"
	I1122 00:59:44.676800  721299 pod_ready.go:86] duration metric: took 5.948955ms for pod "etcd-default-k8s-diff-port-882305" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:59:44.679257  721299 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-882305" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:59:44.685446  721299 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-882305" is "Ready"
	I1122 00:59:44.685474  721299 pod_ready.go:86] duration metric: took 6.193233ms for pod "kube-apiserver-default-k8s-diff-port-882305" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:59:44.687892  721299 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-882305" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:59:44.863822  721299 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-882305" is "Ready"
	I1122 00:59:44.863854  721299 pod_ready.go:86] duration metric: took 175.932972ms for pod "kube-controller-manager-default-k8s-diff-port-882305" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:59:45.067555  721299 pod_ready.go:83] waiting for pod "kube-proxy-59l6x" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:59:45.461778  721299 pod_ready.go:94] pod "kube-proxy-59l6x" is "Ready"
	I1122 00:59:45.461831  721299 pod_ready.go:86] duration metric: took 394.248623ms for pod "kube-proxy-59l6x" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:59:45.662473  721299 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-882305" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:59:46.061220  721299 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-882305" is "Ready"
	I1122 00:59:46.061273  721299 pod_ready.go:86] duration metric: took 398.759783ms for pod "kube-scheduler-default-k8s-diff-port-882305" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:59:46.061287  721299 pod_ready.go:40] duration metric: took 34.963106194s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:59:46.155480  721299 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1122 00:59:46.159306  721299 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-882305" cluster and "default" namespace by default
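
The interleaved process above (721299) spends ~35s polling each kube-system component pod until it reports Ready before declaring the profile done. Outside the test harness, roughly the same check can be expressed with `kubectl wait` against the labels listed in the log's final summary; this is a sketch, not what pod_ready.go literally runs, and the context name is assumed to match the profile:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	// Labels mirror the set in the "extra waiting" summary above.
    	selectors := []string{
    		"k8s-app=kube-dns",
    		"component=etcd",
    		"component=kube-apiserver",
    		"component=kube-controller-manager",
    		"k8s-app=kube-proxy",
    		"component=kube-scheduler",
    	}
    	for _, sel := range selectors {
    		cmd := exec.Command("kubectl", "--context", "default-k8s-diff-port-882305",
    			"-n", "kube-system", "wait", "--for=condition=Ready",
    			"pod", "-l", sel, "--timeout=120s")
    		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    		if err := cmd.Run(); err != nil {
    			fmt.Println("pods for", sel, "did not become Ready:", err)
    			return
    		}
    	}
    	fmt.Println("all kube-system component pods Ready")
    }
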
	I1122 00:59:44.634681  727352 provision.go:177] copyRemoteCerts
	I1122 00:59:44.634770  727352 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:59:44.634843  727352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-163229
	I1122 00:59:44.654432  727352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/auto-163229/id_rsa Username:docker}
	I1122 00:59:44.765688  727352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:59:44.785138  727352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1122 00:59:44.804882  727352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1122 00:59:44.833145  727352 provision.go:87] duration metric: took 850.905992ms to configureAuth
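
configureAuth above generates a server certificate whose SANs cover 127.0.0.1, 192.168.76.2, auto-163229, localhost and minikube, then copies it to /etc/docker on the node. A compressed sketch of issuing a certificate with those SANs using Go's crypto/x509; it is self-signed here for brevity, whereas the provisioner signs against the ca.pem/ca-key.pem listed in the auth options, and the subject organization is only illustrative:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// SANs as reported in the provisioning log.
    	ipSANs := []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")}
    	dnsSANs := []string{"auto-163229", "localhost", "minikube"}

    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.auto-163229"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the cluster config
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  ipSANs,
    		DNSNames:     dnsSANs,
    	}
    	// Self-signed for the sketch; minikube uses its CA as the parent instead.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
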
	I1122 00:59:44.833170  727352 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:59:44.833366  727352 config.go:182] Loaded profile config "auto-163229": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:59:44.833498  727352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-163229
	I1122 00:59:44.853143  727352 main.go:143] libmachine: Using SSH client type: native
	I1122 00:59:44.854123  727352 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33822 <nil> <nil>}
	I1122 00:59:44.854149  727352 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1122 00:59:45.278826  727352 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1122 00:59:45.279156  727352 machine.go:97] duration metric: took 4.807628961s to provisionDockerMachine
	I1122 00:59:45.279245  727352 client.go:176] duration metric: took 10.798904715s to LocalClient.Create
	I1122 00:59:45.279416  727352 start.go:167] duration metric: took 10.799120652s to libmachine.API.Create "auto-163229"
	I1122 00:59:45.279511  727352 start.go:293] postStartSetup for "auto-163229" (driver="docker")
	I1122 00:59:45.279523  727352 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:59:45.279589  727352 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:59:45.279638  727352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-163229
	I1122 00:59:45.310981  727352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/auto-163229/id_rsa Username:docker}
	I1122 00:59:45.420665  727352 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:59:45.424168  727352 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:59:45.424199  727352 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:59:45.424211  727352 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/addons for local assets ...
	I1122 00:59:45.424265  727352 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-513600/.minikube/files for local assets ...
	I1122 00:59:45.424345  727352 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem -> 5169372.pem in /etc/ssl/certs
	I1122 00:59:45.424453  727352 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:59:45.433646  727352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /etc/ssl/certs/5169372.pem (1708 bytes)
	I1122 00:59:45.462922  727352 start.go:296] duration metric: took 183.396783ms for postStartSetup
	I1122 00:59:45.463342  727352 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-163229
	I1122 00:59:45.481009  727352 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/config.json ...
	I1122 00:59:45.481316  727352 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:59:45.481370  727352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-163229
	I1122 00:59:45.498886  727352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/auto-163229/id_rsa Username:docker}
	I1122 00:59:45.598859  727352 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:59:45.603721  727352 start.go:128] duration metric: took 11.127150238s to createHost
	I1122 00:59:45.603746  727352 start.go:83] releasing machines lock for "auto-163229", held for 11.127275854s
	I1122 00:59:45.603838  727352 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-163229
	I1122 00:59:45.621850  727352 ssh_runner.go:195] Run: cat /version.json
	I1122 00:59:45.621890  727352 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:59:45.621917  727352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-163229
	I1122 00:59:45.621959  727352 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-163229
	I1122 00:59:45.643182  727352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/auto-163229/id_rsa Username:docker}
	I1122 00:59:45.659784  727352 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33822 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/auto-163229/id_rsa Username:docker}
	I1122 00:59:45.838734  727352 ssh_runner.go:195] Run: systemctl --version
	I1122 00:59:45.845373  727352 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1122 00:59:45.887568  727352 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:59:45.892002  727352 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:59:45.892077  727352 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:59:45.921513  727352 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1122 00:59:45.921536  727352 start.go:496] detecting cgroup driver to use...
	I1122 00:59:45.921579  727352 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1122 00:59:45.921637  727352 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1122 00:59:45.939064  727352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1122 00:59:45.952530  727352 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:59:45.952597  727352 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:59:45.970430  727352 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:59:45.989037  727352 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:59:46.141203  727352 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:59:46.326829  727352 docker.go:234] disabling docker service ...
	I1122 00:59:46.326889  727352 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:59:46.362021  727352 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:59:46.378296  727352 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:59:46.551394  727352 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:59:46.722367  727352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:59:46.735812  727352 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:59:46.750270  727352 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1122 00:59:46.750339  727352 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:59:46.759004  727352 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1122 00:59:46.759087  727352 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:59:46.768325  727352 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:59:46.778517  727352 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:59:46.787944  727352 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:59:46.796508  727352 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:59:46.805341  727352 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:59:46.819565  727352 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1122 00:59:46.828055  727352 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:59:46.835498  727352 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:59:46.842398  727352 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:59:46.962503  727352 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1122 00:59:47.144582  727352 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1122 00:59:47.144729  727352 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1122 00:59:47.148663  727352 start.go:564] Will wait 60s for crictl version
	I1122 00:59:47.148723  727352 ssh_runner.go:195] Run: which crictl
	I1122 00:59:47.152354  727352 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:59:47.178750  727352 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1122 00:59:47.178833  727352 ssh_runner.go:195] Run: crio --version
	I1122 00:59:47.208313  727352 ssh_runner.go:195] Run: crio --version
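
After rewriting /etc/crictl.yaml and the CRI-O drop-in and restarting the service, the flow above confirms the runtime by querying its version (cri-o 1.34.2, CRI API v1). A small sketch of the same check; /etc/crictl.yaml already points at the CRI-O socket per the log, so the explicit endpoint flag is shown only for clarity:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	out, err := exec.Command("sudo", "crictl",
    		"--runtime-endpoint", "unix:///var/run/crio/crio.sock",
    		"version").CombinedOutput()
    	if err != nil {
    		fmt.Printf("crictl version failed: %v\n%s", err, out)
    		return
    	}
    	fmt.Print(string(out))
    }
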
	I1122 00:59:47.251382  727352 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1122 00:59:47.254294  727352 cli_runner.go:164] Run: docker network inspect auto-163229 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:59:47.270822  727352 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1122 00:59:47.275570  727352 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:59:47.285463  727352 kubeadm.go:884] updating cluster {Name:auto-163229 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-163229 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1122 00:59:47.285587  727352 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1122 00:59:47.285646  727352 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:59:47.320213  727352 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:59:47.320237  727352 crio.go:433] Images already preloaded, skipping extraction
	I1122 00:59:47.320293  727352 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:59:47.346511  727352 crio.go:514] all images are preloaded for cri-o runtime.
	I1122 00:59:47.346536  727352 cache_images.go:86] Images are preloaded, skipping loading
	I1122 00:59:47.346545  727352 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1122 00:59:47.346633  727352 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-163229 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-163229 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1122 00:59:47.346716  727352 ssh_runner.go:195] Run: crio config
	I1122 00:59:47.407109  727352 cni.go:84] Creating CNI manager for ""
	I1122 00:59:47.407131  727352 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1122 00:59:47.407166  727352 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1122 00:59:47.407194  727352 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-163229 NodeName:auto-163229 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 00:59:47.407346  727352 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-163229"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
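	(The generated config above is what minikube writes to /var/tmp/minikube/kubeadm.yaml.new and later copies to /var/tmp/minikube/kubeadm.yaml; see the scp and cp steps below. A config of this shape can be sanity-checked before kubeadm consumes it; a minimal sketch, assuming kubeadm v1.34.x is on the PATH and the path matches the log:)
	# dry-run renders manifests into a temp directory instead of /etc/kubernetes
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
	# recent kubeadm releases also ship `kubeadm config validate --config <file>` for a schema-only check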
	
	I1122 00:59:47.407423  727352 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 00:59:47.415468  727352 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:59:47.415542  727352 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1122 00:59:47.423413  727352 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1122 00:59:47.438444  727352 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:59:47.451577  727352 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1122 00:59:47.467750  727352 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1122 00:59:47.471271  727352 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:59:47.480619  727352 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:59:47.602183  727352 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:59:47.616885  727352 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229 for IP: 192.168.76.2
	I1122 00:59:47.616905  727352 certs.go:195] generating shared ca certs ...
	I1122 00:59:47.616920  727352 certs.go:227] acquiring lock for ca certs: {Name:mkaf4c79493334cb2058b349c36be7473837f9b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:59:47.617054  727352 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key
	I1122 00:59:47.617101  727352 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key
	I1122 00:59:47.617108  727352 certs.go:257] generating profile certs ...
	I1122 00:59:47.617160  727352 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/client.key
	I1122 00:59:47.617171  727352 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/client.crt with IP's: []
	I1122 00:59:47.773612  727352 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/client.crt ...
	I1122 00:59:47.773647  727352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/client.crt: {Name:mk5580137d08122d88e24c87855247707ecd684e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:59:47.773893  727352 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/client.key ...
	I1122 00:59:47.773908  727352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/client.key: {Name:mk70359a60a71ae6aafe962c3d06ef297af57caf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:59:47.774008  727352 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/apiserver.key.b2ec0605
	I1122 00:59:47.774027  727352 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/apiserver.crt.b2ec0605 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1122 00:59:48.157177  727352 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/apiserver.crt.b2ec0605 ...
	I1122 00:59:48.157210  727352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/apiserver.crt.b2ec0605: {Name:mk04b9b1215305a4ce60df91b5f1218eb67834ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:59:48.157401  727352 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/apiserver.key.b2ec0605 ...
	I1122 00:59:48.157418  727352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/apiserver.key.b2ec0605: {Name:mk4b495c49e8f0dcb2b9ed7589a131239c2e003d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:59:48.157508  727352 certs.go:382] copying /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/apiserver.crt.b2ec0605 -> /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/apiserver.crt
	I1122 00:59:48.157588  727352 certs.go:386] copying /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/apiserver.key.b2ec0605 -> /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/apiserver.key
	I1122 00:59:48.157651  727352 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/proxy-client.key
	I1122 00:59:48.157670  727352 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/proxy-client.crt with IP's: []
	I1122 00:59:48.343366  727352 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/proxy-client.crt ...
	I1122 00:59:48.343403  727352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/proxy-client.crt: {Name:mk7ec962a4e17ad4a186ee789f8019e69095844d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:59:48.343584  727352 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/proxy-client.key ...
	I1122 00:59:48.343595  727352 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/proxy-client.key: {Name:mkc71167ce9561739ff6445f93ee6d2004526d29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:59:48.343786  727352 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem (1338 bytes)
	W1122 00:59:48.343834  727352 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937_empty.pem, impossibly tiny 0 bytes
	I1122 00:59:48.343851  727352 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:59:48.343879  727352 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:59:48.343907  727352 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:59:48.343936  727352 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/certs/key.pem (1675 bytes)
	I1122 00:59:48.343987  727352 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem (1708 bytes)
	I1122 00:59:48.344614  727352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:59:48.362870  727352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1122 00:59:48.385911  727352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:59:48.404064  727352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:59:48.423748  727352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1122 00:59:48.442037  727352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1122 00:59:48.464720  727352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:59:48.483000  727352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1122 00:59:48.502692  727352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/ssl/certs/5169372.pem --> /usr/share/ca-certificates/5169372.pem (1708 bytes)
	I1122 00:59:48.530443  727352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:59:48.553388  727352 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-513600/.minikube/certs/516937.pem --> /usr/share/ca-certificates/516937.pem (1338 bytes)
	I1122 00:59:48.575191  727352 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 00:59:48.589563  727352 ssh_runner.go:195] Run: openssl version
	I1122 00:59:48.596256  727352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:59:48.604477  727352 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:59:48.608272  727352 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:59:48.608351  727352 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:59:48.649460  727352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1122 00:59:48.658306  727352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/516937.pem && ln -fs /usr/share/ca-certificates/516937.pem /etc/ssl/certs/516937.pem"
	I1122 00:59:48.666677  727352 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/516937.pem
	I1122 00:59:48.670595  727352 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:55 /usr/share/ca-certificates/516937.pem
	I1122 00:59:48.670691  727352 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/516937.pem
	I1122 00:59:48.711944  727352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/516937.pem /etc/ssl/certs/51391683.0"
	I1122 00:59:48.720024  727352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5169372.pem && ln -fs /usr/share/ca-certificates/5169372.pem /etc/ssl/certs/5169372.pem"
	I1122 00:59:48.728115  727352 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5169372.pem
	I1122 00:59:48.731676  727352 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:55 /usr/share/ca-certificates/5169372.pem
	I1122 00:59:48.731744  727352 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5169372.pem
	I1122 00:59:48.772569  727352 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5169372.pem /etc/ssl/certs/3ec20f2e.0"
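	(The test/ln/openssl sequence above installs each CA under /etc/ssl/certs using the subject-hash filename that OpenSSL looks up at verification time. The same pattern for a single certificate, as a minimal sketch with the path taken from the log above:)
	cert=/usr/share/ca-certificates/minikubeCA.pem       # CA copied onto the node earlier in the log
	hash=$(openssl x509 -hash -noout -in "$cert")        # subject hash, e.g. b5213941 as seen above
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"       # ".0" suffix; ".1", ".2", ... on hash collisions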
	I1122 00:59:48.780728  727352 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:59:48.784018  727352 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1122 00:59:48.784071  727352 kubeadm.go:401] StartCluster: {Name:auto-163229 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-163229 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:59:48.784153  727352 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1122 00:59:48.784216  727352 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:59:48.811826  727352 cri.go:89] found id: ""
	I1122 00:59:48.811939  727352 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 00:59:48.820457  727352 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1122 00:59:48.828313  727352 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1122 00:59:48.828406  727352 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1122 00:59:48.837469  727352 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1122 00:59:48.837487  727352 kubeadm.go:158] found existing configuration files:
	
	I1122 00:59:48.837535  727352 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1122 00:59:48.845373  727352 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1122 00:59:48.845463  727352 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1122 00:59:48.852820  727352 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1122 00:59:48.860513  727352 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1122 00:59:48.860599  727352 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1122 00:59:48.868196  727352 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1122 00:59:48.876166  727352 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1122 00:59:48.876238  727352 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1122 00:59:48.883537  727352 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1122 00:59:48.890999  727352 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1122 00:59:48.891063  727352 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
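	(The four grep/rm pairs above are the stale-config cleanup: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is removed before kubeadm init runs. A compact sketch of the same check, assuming the endpoint string from the log above:)
	endpoint="https://control-plane.minikube.internal:8443"   # expected API endpoint, per the log above
	for name in admin kubelet controller-manager scheduler; do
	  cfg="/etc/kubernetes/${name}.conf"
	  # keep the kubeconfig only if it already points at the expected endpoint
	  sudo grep -q "$endpoint" "$cfg" 2>/dev/null || sudo rm -f "$cfg"
	done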
	I1122 00:59:48.898436  727352 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1122 00:59:48.942457  727352 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1122 00:59:48.942683  727352 kubeadm.go:319] [preflight] Running pre-flight checks
	I1122 00:59:48.964789  727352 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1122 00:59:48.964868  727352 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1122 00:59:48.964908  727352 kubeadm.go:319] OS: Linux
	I1122 00:59:48.964960  727352 kubeadm.go:319] CGROUPS_CPU: enabled
	I1122 00:59:48.965013  727352 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1122 00:59:48.965063  727352 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1122 00:59:48.965115  727352 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1122 00:59:48.965166  727352 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1122 00:59:48.965218  727352 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1122 00:59:48.965267  727352 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1122 00:59:48.965318  727352 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1122 00:59:48.965368  727352 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1122 00:59:49.030716  727352 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1122 00:59:49.030833  727352 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1122 00:59:49.030930  727352 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1122 00:59:49.041285  727352 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1122 00:59:49.047898  727352 out.go:252]   - Generating certificates and keys ...
	I1122 00:59:49.047995  727352 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1122 00:59:49.048070  727352 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1122 00:59:49.328163  727352 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1122 00:59:50.196631  727352 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1122 00:59:50.595835  727352 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1122 00:59:51.368980  727352 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1122 00:59:52.157278  727352 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1122 00:59:52.157497  727352 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-163229 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1122 00:59:52.579519  727352 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1122 00:59:52.579882  727352 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-163229 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1122 00:59:53.580391  727352 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1122 00:59:53.794266  727352 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1122 00:59:54.111652  727352 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1122 00:59:54.112005  727352 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1122 00:59:54.724094  727352 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1122 00:59:55.736272  727352 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1122 00:59:55.865678  727352 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1122 00:59:56.405655  727352 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1122 00:59:57.098307  727352 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1122 00:59:57.099700  727352 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1122 00:59:57.103812  727352 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1122 00:59:57.107123  727352 out.go:252]   - Booting up control plane ...
	I1122 00:59:57.107221  727352 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1122 00:59:57.107307  727352 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1122 00:59:57.107376  727352 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1122 00:59:57.124097  727352 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1122 00:59:57.124219  727352 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1122 00:59:57.132222  727352 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1122 00:59:57.132535  727352 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1122 00:59:57.132581  727352 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1122 00:59:57.254893  727352 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1122 00:59:57.255034  727352 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1122 01:00:00.267215  727352 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 3.010770888s
	I1122 01:00:00.285738  727352 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1122 01:00:00.288725  727352 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1122 01:00:00.289625  727352 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1122 01:00:00.290227  727352 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	
	
	==> CRI-O <==
	Nov 22 00:59:44 default-k8s-diff-port-882305 crio[652]: time="2025-11-22T00:59:44.813956301Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=abd052a6-1b79-456e-9da3-7f85865892ed name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:59:44 default-k8s-diff-port-882305 crio[652]: time="2025-11-22T00:59:44.816033728Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=c2b86634-46c2-4056-81e4-c208697c4361 name=/runtime.v1.ImageService/ImageStatus
	Nov 22 00:59:44 default-k8s-diff-port-882305 crio[652]: time="2025-11-22T00:59:44.817182333Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qhkdl/dashboard-metrics-scraper" id=b672711d-90d1-4384-aaac-57629b0289ea name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:59:44 default-k8s-diff-port-882305 crio[652]: time="2025-11-22T00:59:44.817286946Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:59:44 default-k8s-diff-port-882305 crio[652]: time="2025-11-22T00:59:44.829228614Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:59:44 default-k8s-diff-port-882305 crio[652]: time="2025-11-22T00:59:44.831043401Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 22 00:59:44 default-k8s-diff-port-882305 crio[652]: time="2025-11-22T00:59:44.853560906Z" level=info msg="Created container e6c681247aa417267830503d9c58396605d3706c59d9c0cf213900ef1533158d: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qhkdl/dashboard-metrics-scraper" id=b672711d-90d1-4384-aaac-57629b0289ea name=/runtime.v1.RuntimeService/CreateContainer
	Nov 22 00:59:44 default-k8s-diff-port-882305 crio[652]: time="2025-11-22T00:59:44.861295891Z" level=info msg="Starting container: e6c681247aa417267830503d9c58396605d3706c59d9c0cf213900ef1533158d" id=4d6b603b-4848-49e8-ad76-1e99357dfdc9 name=/runtime.v1.RuntimeService/StartContainer
	Nov 22 00:59:44 default-k8s-diff-port-882305 crio[652]: time="2025-11-22T00:59:44.863378913Z" level=info msg="Started container" PID=1638 containerID=e6c681247aa417267830503d9c58396605d3706c59d9c0cf213900ef1533158d description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qhkdl/dashboard-metrics-scraper id=4d6b603b-4848-49e8-ad76-1e99357dfdc9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1a453755c2263d3266645840281d1cf1f6ea63efc84761c5e0db4c52c760cb39
	Nov 22 00:59:44 default-k8s-diff-port-882305 conmon[1636]: conmon e6c681247aa417267830 <ninfo>: container 1638 exited with status 1
	Nov 22 00:59:45 default-k8s-diff-port-882305 crio[652]: time="2025-11-22T00:59:45.272909836Z" level=info msg="Removing container: 7fef82247ff2f4e4a77f9f05faa9569d4394bd65ead3af491328f162ebef040d" id=6d7b4fb7-efee-4934-97a5-84b4f984aba6 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 22 00:59:45 default-k8s-diff-port-882305 crio[652]: time="2025-11-22T00:59:45.284854679Z" level=info msg="Error loading conmon cgroup of container 7fef82247ff2f4e4a77f9f05faa9569d4394bd65ead3af491328f162ebef040d: cgroup deleted" id=6d7b4fb7-efee-4934-97a5-84b4f984aba6 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 22 00:59:45 default-k8s-diff-port-882305 crio[652]: time="2025-11-22T00:59:45.301936815Z" level=info msg="Removed container 7fef82247ff2f4e4a77f9f05faa9569d4394bd65ead3af491328f162ebef040d: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qhkdl/dashboard-metrics-scraper" id=6d7b4fb7-efee-4934-97a5-84b4f984aba6 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 22 00:59:51 default-k8s-diff-port-882305 crio[652]: time="2025-11-22T00:59:51.412178499Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:59:51 default-k8s-diff-port-882305 crio[652]: time="2025-11-22T00:59:51.41920841Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:59:51 default-k8s-diff-port-882305 crio[652]: time="2025-11-22T00:59:51.419368505Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 22 00:59:51 default-k8s-diff-port-882305 crio[652]: time="2025-11-22T00:59:51.419442718Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:59:51 default-k8s-diff-port-882305 crio[652]: time="2025-11-22T00:59:51.426054516Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:59:51 default-k8s-diff-port-882305 crio[652]: time="2025-11-22T00:59:51.426218417Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 22 00:59:51 default-k8s-diff-port-882305 crio[652]: time="2025-11-22T00:59:51.426289652Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:59:51 default-k8s-diff-port-882305 crio[652]: time="2025-11-22T00:59:51.433167182Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:59:51 default-k8s-diff-port-882305 crio[652]: time="2025-11-22T00:59:51.433315838Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 22 00:59:51 default-k8s-diff-port-882305 crio[652]: time="2025-11-22T00:59:51.433385219Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 22 00:59:51 default-k8s-diff-port-882305 crio[652]: time="2025-11-22T00:59:51.437305887Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 22 00:59:51 default-k8s-diff-port-882305 crio[652]: time="2025-11-22T00:59:51.437446282Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	e6c681247aa41       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           21 seconds ago       Exited              dashboard-metrics-scraper   2                   1a453755c2263       dashboard-metrics-scraper-6ffb444bf9-qhkdl             kubernetes-dashboard
	6797e48f56252       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           24 seconds ago       Running             storage-provisioner         2                   882a29a7f6a3e       storage-provisioner                                    kube-system
	93cb84e4ab699       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   44 seconds ago       Running             kubernetes-dashboard        0                   443e84d4af101       kubernetes-dashboard-855c9754f9-sx5ls                  kubernetes-dashboard
	77f6a3c0f1d2e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           54 seconds ago       Running             coredns                     1                   1b306f7653d80       coredns-66bc5c9577-448gn                               kube-system
	4358d6a53beb0       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           55 seconds ago       Running             busybox                     1                   46d69b833ecb2       busybox                                                default
	99efae2906741       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           55 seconds ago       Running             kindnet-cni                 1                   af7ee23936519       kindnet-kcwqj                                          kube-system
	24c64924a669e       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           55 seconds ago       Running             kube-proxy                  1                   287f524f55754       kube-proxy-59l6x                                       kube-system
	e34c46d28bc8c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           55 seconds ago       Exited              storage-provisioner         1                   882a29a7f6a3e       storage-provisioner                                    kube-system
	ef5cf3bc0e8a1       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   a64cd93c0583f       kube-controller-manager-default-k8s-diff-port-882305   kube-system
	d1d854f1c70c8       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   8d2f2277e2734       kube-scheduler-default-k8s-diff-port-882305            kube-system
	c0ae038240897       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   1478edc689a14       etcd-default-k8s-diff-port-882305                      kube-system
	1ce380445cfc1       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   aa4e4c0b74e90       kube-apiserver-default-k8s-diff-port-882305            kube-system
	
	
	==> coredns [77f6a3c0f1d2e079997f3dddd18e52dfa729d725f0cb10e1940295c459f10d6b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59403 - 7797 "HINFO IN 4530320780478815200.5693597820438234680. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.007762063s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-882305
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-882305
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=default-k8s-diff-port-882305
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T00_58_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:58:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-882305
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:59:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:59:49 +0000   Sat, 22 Nov 2025 00:57:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:59:49 +0000   Sat, 22 Nov 2025 00:57:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:59:49 +0000   Sat, 22 Nov 2025 00:57:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 22 Nov 2025 00:59:49 +0000   Sat, 22 Nov 2025 00:58:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-882305
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c92eea4d3c03c0156e01fcf8691e3907
	  System UUID:                3e7302ec-f0a5-4d0d-8a5f-75986888bef8
	  Boot ID:                    72ac7385-472f-47d1-a23e-bd80468d6e09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 coredns-66bc5c9577-448gn                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     115s
	  kube-system                 etcd-default-k8s-diff-port-882305                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         119s
	  kube-system                 kindnet-kcwqj                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      115s
	  kube-system                 kube-apiserver-default-k8s-diff-port-882305             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-882305    200m (10%)    0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-proxy-59l6x                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-scheduler-default-k8s-diff-port-882305             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-qhkdl              0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-sx5ls                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 111s                 kube-proxy       
	  Normal   Starting                 54s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  2m7s (x8 over 2m7s)  kubelet          Node default-k8s-diff-port-882305 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m7s (x8 over 2m7s)  kubelet          Node default-k8s-diff-port-882305 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m7s (x8 over 2m7s)  kubelet          Node default-k8s-diff-port-882305 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m                   kubelet          Node default-k8s-diff-port-882305 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m                   kubelet          Node default-k8s-diff-port-882305 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m                   kubelet          Node default-k8s-diff-port-882305 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m                   kubelet          Starting kubelet.
	  Normal   RegisteredNode           116s                 node-controller  Node default-k8s-diff-port-882305 event: Registered Node default-k8s-diff-port-882305 in Controller
	  Normal   NodeReady                101s                 kubelet          Node default-k8s-diff-port-882305 status is now: NodeReady
	  Normal   Starting                 64s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 64s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  64s (x8 over 64s)    kubelet          Node default-k8s-diff-port-882305 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    64s (x8 over 64s)    kubelet          Node default-k8s-diff-port-882305 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     64s (x8 over 64s)    kubelet          Node default-k8s-diff-port-882305 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           54s                  node-controller  Node default-k8s-diff-port-882305 event: Registered Node default-k8s-diff-port-882305 in Controller
	
	
	==> dmesg <==
	[ +56.322609] overlayfs: idmapped layers are currently not supported
	[Nov22 00:38] overlayfs: idmapped layers are currently not supported
	[Nov22 00:39] overlayfs: idmapped layers are currently not supported
	[ +23.174928] overlayfs: idmapped layers are currently not supported
	[Nov22 00:41] overlayfs: idmapped layers are currently not supported
	[Nov22 00:42] overlayfs: idmapped layers are currently not supported
	[Nov22 00:44] overlayfs: idmapped layers are currently not supported
	[Nov22 00:45] overlayfs: idmapped layers are currently not supported
	[Nov22 00:46] overlayfs: idmapped layers are currently not supported
	[Nov22 00:48] overlayfs: idmapped layers are currently not supported
	[Nov22 00:50] overlayfs: idmapped layers are currently not supported
	[Nov22 00:51] overlayfs: idmapped layers are currently not supported
	[ +11.900293] overlayfs: idmapped layers are currently not supported
	[ +28.922055] overlayfs: idmapped layers are currently not supported
	[Nov22 00:52] overlayfs: idmapped layers are currently not supported
	[Nov22 00:53] overlayfs: idmapped layers are currently not supported
	[Nov22 00:54] overlayfs: idmapped layers are currently not supported
	[Nov22 00:55] overlayfs: idmapped layers are currently not supported
	[Nov22 00:56] overlayfs: idmapped layers are currently not supported
	[Nov22 00:57] overlayfs: idmapped layers are currently not supported
	[Nov22 00:58] overlayfs: idmapped layers are currently not supported
	[ +43.407301] overlayfs: idmapped layers are currently not supported
	[Nov22 00:59] overlayfs: idmapped layers are currently not supported
	[  +8.585740] overlayfs: idmapped layers are currently not supported
	[Nov22 01:00] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [c0ae03824089747781ca3fa95c137501b3b35608e772c7bf534789a146554e3c] <==
	{"level":"warn","ts":"2025-11-22T00:59:07.029564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:07.054272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:07.087750Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:07.100268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:07.109064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:07.148563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:07.164513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:07.189834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:07.208016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:07.249473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:07.262290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:07.286270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:07.298624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:07.335948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:07.349161Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:07.364503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:07.383696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:07.407283Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:07.423463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:07.448141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:07.467726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:07.513321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:07.542417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:07.560597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:59:07.657267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33010","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 01:00:06 up  5:42,  0 user,  load average: 5.89, 4.45, 3.25
	Linux default-k8s-diff-port-882305 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [99efae290674153fda78e1cc8d351668db7f8f1a6a89e147416cef08d7b43096] <==
	I1122 00:59:11.213944       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1122 00:59:11.214287       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1122 00:59:11.220187       1 main.go:148] setting mtu 1500 for CNI 
	I1122 00:59:11.220367       1 main.go:178] kindnetd IP family: "ipv4"
	I1122 00:59:11.220415       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-22T00:59:11Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1122 00:59:11.411480       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1122 00:59:11.411573       1 controller.go:381] "Waiting for informer caches to sync"
	I1122 00:59:11.411605       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1122 00:59:11.412374       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1122 00:59:41.412254       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1122 00:59:41.412374       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1122 00:59:41.412456       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1122 00:59:41.412535       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1122 00:59:43.012806       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1122 00:59:43.012842       1 metrics.go:72] Registering metrics
	I1122 00:59:43.012904       1 controller.go:711] "Syncing nftables rules"
	I1122 00:59:51.411777       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:59:51.411907       1 main.go:301] handling current node
	I1122 01:00:01.419472       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 01:00:01.419532       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1ce380445cfc1fe8d2cbb405092ab03fd65cb6c2cf8bac3317898266e679c5d3] <==
	I1122 00:59:09.166145       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1122 00:59:09.166273       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1122 00:59:09.166315       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1122 00:59:09.166547       1 aggregator.go:171] initial CRD sync complete...
	I1122 00:59:09.166556       1 autoregister_controller.go:144] Starting autoregister controller
	I1122 00:59:09.166561       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1122 00:59:09.166566       1 cache.go:39] Caches are synced for autoregister controller
	I1122 00:59:09.166727       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1122 00:59:09.174446       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1122 00:59:09.174473       1 policy_source.go:240] refreshing policies
	I1122 00:59:09.190300       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1122 00:59:09.197252       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1122 00:59:09.219609       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1122 00:59:09.428166       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1122 00:59:10.167478       1 controller.go:667] quota admission added evaluator for: namespaces
	I1122 00:59:10.241611       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1122 00:59:10.346097       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1122 00:59:10.418866       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1122 00:59:10.490698       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1122 00:59:10.626123       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.206.183"}
	I1122 00:59:10.709293       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.232.63"}
	I1122 00:59:12.388266       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1122 00:59:12.437666       1 controller.go:667] quota admission added evaluator for: endpoints
	I1122 00:59:12.480282       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1122 00:59:12.508830       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [ef5cf3bc0e8a1e84b865765165a5244f97715b14ad4afe6bdecb47483cb802ba] <==
	I1122 00:59:12.282504       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1122 00:59:12.282612       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1122 00:59:12.282671       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1122 00:59:12.286093       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:59:12.302419       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1122 00:59:12.306003       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1122 00:59:12.318283       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1122 00:59:12.318426       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1122 00:59:12.335849       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1122 00:59:12.335959       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1122 00:59:12.336010       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1122 00:59:12.336077       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1122 00:59:12.336117       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1122 00:59:12.336177       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1122 00:59:12.336500       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:59:12.338310       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1122 00:59:12.343745       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1122 00:59:12.349541       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-882305"
	I1122 00:59:12.350546       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1122 00:59:12.343892       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1122 00:59:12.344600       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1122 00:59:12.428981       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:59:12.429270       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1122 00:59:12.429357       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1122 00:59:12.429461       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [24c64924a669eecc6f41d3f6f2a0935ebe7520e41d8214678fc5533fb88d7dd3] <==
	I1122 00:59:11.471029       1 server_linux.go:53] "Using iptables proxy"
	I1122 00:59:11.632210       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1122 00:59:11.755208       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1122 00:59:11.759137       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1122 00:59:11.766042       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1122 00:59:11.879624       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1122 00:59:11.879675       1 server_linux.go:132] "Using iptables Proxier"
	I1122 00:59:11.889079       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1122 00:59:11.889344       1 server.go:527] "Version info" version="v1.34.1"
	I1122 00:59:11.889359       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:59:11.891013       1 config.go:200] "Starting service config controller"
	I1122 00:59:11.891037       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1122 00:59:11.891061       1 config.go:106] "Starting endpoint slice config controller"
	I1122 00:59:11.891066       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1122 00:59:11.891081       1 config.go:403] "Starting serviceCIDR config controller"
	I1122 00:59:11.891085       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1122 00:59:11.895682       1 config.go:309] "Starting node config controller"
	I1122 00:59:11.895701       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1122 00:59:11.895709       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1122 00:59:11.993456       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1122 00:59:11.993502       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1122 00:59:11.993541       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [d1d854f1c70c8c8f58aacea7d3bc3bea0c433b6787c467ffaf9f43d30127f3aa] <==
	I1122 00:59:08.780069       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:59:08.830655       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:59:08.830692       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:59:08.832564       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1122 00:59:08.832685       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1122 00:59:08.867215       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1122 00:59:08.867293       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1122 00:59:08.867588       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1122 00:59:08.867655       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1122 00:59:08.867694       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1122 00:59:08.867730       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1122 00:59:08.867767       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1122 00:59:08.867807       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1122 00:59:08.867846       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1122 00:59:08.867882       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1122 00:59:08.867916       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1122 00:59:08.867960       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1122 00:59:08.868588       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1122 00:59:08.890275       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1122 00:59:08.890338       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1122 00:59:08.890378       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1122 00:59:08.890417       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1122 00:59:08.890454       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1122 00:59:08.890503       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1122 00:59:10.033861       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 22 00:59:12 default-k8s-diff-port-882305 kubelet[782]: I1122 00:59:12.768979     782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djl87\" (UniqueName: \"kubernetes.io/projected/a27d7302-b089-4adf-a86b-4d6b9bfdb28c-kube-api-access-djl87\") pod \"kubernetes-dashboard-855c9754f9-sx5ls\" (UID: \"a27d7302-b089-4adf-a86b-4d6b9bfdb28c\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-sx5ls"
	Nov 22 00:59:12 default-k8s-diff-port-882305 kubelet[782]: I1122 00:59:12.769492     782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a27d7302-b089-4adf-a86b-4d6b9bfdb28c-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-sx5ls\" (UID: \"a27d7302-b089-4adf-a86b-4d6b9bfdb28c\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-sx5ls"
	Nov 22 00:59:12 default-k8s-diff-port-882305 kubelet[782]: I1122 00:59:12.869986     782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/de50e910-8206-46b1-918d-353f76a54323-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-qhkdl\" (UID: \"de50e910-8206-46b1-918d-353f76a54323\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qhkdl"
	Nov 22 00:59:12 default-k8s-diff-port-882305 kubelet[782]: I1122 00:59:12.870193     782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sc8c\" (UniqueName: \"kubernetes.io/projected/de50e910-8206-46b1-918d-353f76a54323-kube-api-access-9sc8c\") pod \"dashboard-metrics-scraper-6ffb444bf9-qhkdl\" (UID: \"de50e910-8206-46b1-918d-353f76a54323\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qhkdl"
	Nov 22 00:59:12 default-k8s-diff-port-882305 kubelet[782]: W1122 00:59:12.995860     782 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/3f972239d661c1a7b13b144967a8f04cbcdb72c736bec80a78bae8fab1d357e1/crio-443e84d4af101581c7521ebf3de82847ebb19b32ca053e0d41057f86372641f2 WatchSource:0}: Error finding container 443e84d4af101581c7521ebf3de82847ebb19b32ca053e0d41057f86372641f2: Status 404 returned error can't find the container with id 443e84d4af101581c7521ebf3de82847ebb19b32ca053e0d41057f86372641f2
	Nov 22 00:59:13 default-k8s-diff-port-882305 kubelet[782]: W1122 00:59:13.157355     782 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/3f972239d661c1a7b13b144967a8f04cbcdb72c736bec80a78bae8fab1d357e1/crio-1a453755c2263d3266645840281d1cf1f6ea63efc84761c5e0db4c52c760cb39 WatchSource:0}: Error finding container 1a453755c2263d3266645840281d1cf1f6ea63efc84761c5e0db4c52c760cb39: Status 404 returned error can't find the container with id 1a453755c2263d3266645840281d1cf1f6ea63efc84761c5e0db4c52c760cb39
	Nov 22 00:59:28 default-k8s-diff-port-882305 kubelet[782]: I1122 00:59:28.212297     782 scope.go:117] "RemoveContainer" containerID="2d7935ddb77f1a24c452a280df0ed370bb0576b6eecca2c0445ba137dcac57cc"
	Nov 22 00:59:28 default-k8s-diff-port-882305 kubelet[782]: I1122 00:59:28.245608     782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-sx5ls" podStartSLOduration=8.221576667 podStartE2EDuration="16.24559078s" podCreationTimestamp="2025-11-22 00:59:12 +0000 UTC" firstStartedPulling="2025-11-22 00:59:13.008829887 +0000 UTC m=+10.496820335" lastFinishedPulling="2025-11-22 00:59:21.032844 +0000 UTC m=+18.520834448" observedRunningTime="2025-11-22 00:59:21.217308446 +0000 UTC m=+18.705298902" watchObservedRunningTime="2025-11-22 00:59:28.24559078 +0000 UTC m=+25.733581236"
	Nov 22 00:59:29 default-k8s-diff-port-882305 kubelet[782]: I1122 00:59:29.216600     782 scope.go:117] "RemoveContainer" containerID="2d7935ddb77f1a24c452a280df0ed370bb0576b6eecca2c0445ba137dcac57cc"
	Nov 22 00:59:29 default-k8s-diff-port-882305 kubelet[782]: I1122 00:59:29.216916     782 scope.go:117] "RemoveContainer" containerID="7fef82247ff2f4e4a77f9f05faa9569d4394bd65ead3af491328f162ebef040d"
	Nov 22 00:59:29 default-k8s-diff-port-882305 kubelet[782]: E1122 00:59:29.217077     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qhkdl_kubernetes-dashboard(de50e910-8206-46b1-918d-353f76a54323)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qhkdl" podUID="de50e910-8206-46b1-918d-353f76a54323"
	Nov 22 00:59:30 default-k8s-diff-port-882305 kubelet[782]: I1122 00:59:30.222352     782 scope.go:117] "RemoveContainer" containerID="7fef82247ff2f4e4a77f9f05faa9569d4394bd65ead3af491328f162ebef040d"
	Nov 22 00:59:30 default-k8s-diff-port-882305 kubelet[782]: E1122 00:59:30.222994     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qhkdl_kubernetes-dashboard(de50e910-8206-46b1-918d-353f76a54323)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qhkdl" podUID="de50e910-8206-46b1-918d-353f76a54323"
	Nov 22 00:59:33 default-k8s-diff-port-882305 kubelet[782]: I1122 00:59:33.033921     782 scope.go:117] "RemoveContainer" containerID="7fef82247ff2f4e4a77f9f05faa9569d4394bd65ead3af491328f162ebef040d"
	Nov 22 00:59:33 default-k8s-diff-port-882305 kubelet[782]: E1122 00:59:33.034627     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qhkdl_kubernetes-dashboard(de50e910-8206-46b1-918d-353f76a54323)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qhkdl" podUID="de50e910-8206-46b1-918d-353f76a54323"
	Nov 22 00:59:41 default-k8s-diff-port-882305 kubelet[782]: I1122 00:59:41.252577     782 scope.go:117] "RemoveContainer" containerID="e34c46d28bc8c466e4b69397894de9dbaf562f334db936138792ea857f7984cf"
	Nov 22 00:59:44 default-k8s-diff-port-882305 kubelet[782]: I1122 00:59:44.812964     782 scope.go:117] "RemoveContainer" containerID="7fef82247ff2f4e4a77f9f05faa9569d4394bd65ead3af491328f162ebef040d"
	Nov 22 00:59:45 default-k8s-diff-port-882305 kubelet[782]: I1122 00:59:45.267301     782 scope.go:117] "RemoveContainer" containerID="7fef82247ff2f4e4a77f9f05faa9569d4394bd65ead3af491328f162ebef040d"
	Nov 22 00:59:45 default-k8s-diff-port-882305 kubelet[782]: I1122 00:59:45.267635     782 scope.go:117] "RemoveContainer" containerID="e6c681247aa417267830503d9c58396605d3706c59d9c0cf213900ef1533158d"
	Nov 22 00:59:45 default-k8s-diff-port-882305 kubelet[782]: E1122 00:59:45.267942     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qhkdl_kubernetes-dashboard(de50e910-8206-46b1-918d-353f76a54323)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qhkdl" podUID="de50e910-8206-46b1-918d-353f76a54323"
	Nov 22 00:59:53 default-k8s-diff-port-882305 kubelet[782]: I1122 00:59:53.034045     782 scope.go:117] "RemoveContainer" containerID="e6c681247aa417267830503d9c58396605d3706c59d9c0cf213900ef1533158d"
	Nov 22 00:59:53 default-k8s-diff-port-882305 kubelet[782]: E1122 00:59:53.034266     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qhkdl_kubernetes-dashboard(de50e910-8206-46b1-918d-353f76a54323)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qhkdl" podUID="de50e910-8206-46b1-918d-353f76a54323"
	Nov 22 00:59:58 default-k8s-diff-port-882305 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 22 00:59:58 default-k8s-diff-port-882305 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 22 00:59:58 default-k8s-diff-port-882305 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [93cb84e4ab699eb2196717799321e4330d7fd89c7b5847a1367298b8dc5f69b4] <==
	2025/11/22 00:59:21 Starting overwatch
	2025/11/22 00:59:21 Using namespace: kubernetes-dashboard
	2025/11/22 00:59:21 Using in-cluster config to connect to apiserver
	2025/11/22 00:59:21 Using secret token for csrf signing
	2025/11/22 00:59:21 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/22 00:59:21 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/22 00:59:21 Successful initial request to the apiserver, version: v1.34.1
	2025/11/22 00:59:21 Generating JWE encryption key
	2025/11/22 00:59:21 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/22 00:59:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/22 00:59:22 Initializing JWE encryption key from synchronized object
	2025/11/22 00:59:22 Creating in-cluster Sidecar client
	2025/11/22 00:59:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/22 00:59:22 Serving insecurely on HTTP port: 9090
	2025/11/22 00:59:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [6797e48f56252ab176007011001840ada8a9976acd404dda959a8334d3c46cdb] <==
	I1122 00:59:41.307400       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1122 00:59:41.320439       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1122 00:59:41.320554       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1122 00:59:41.323028       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:59:44.779172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:59:49.040448       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:59:52.639304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:59:55.692511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:59:58.715844       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:59:58.722690       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1122 00:59:58.722952       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1122 00:59:58.725187       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-882305_f3fb3511-5bfa-4e5e-b5dc-34953193612b!
	I1122 00:59:58.732243       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5cef023a-e193-4fc5-8350-b0d9fd8c5815", APIVersion:"v1", ResourceVersion:"688", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-882305_f3fb3511-5bfa-4e5e-b5dc-34953193612b became leader
	W1122 00:59:58.733603       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:59:58.759176       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1122 00:59:58.827473       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-882305_f3fb3511-5bfa-4e5e-b5dc-34953193612b!
	W1122 01:00:00.774904       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 01:00:00.796597       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 01:00:02.799633       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 01:00:02.814046       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 01:00:04.817366       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 01:00:04.828177       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [e34c46d28bc8c466e4b69397894de9dbaf562f334db936138792ea857f7984cf] <==
	I1122 00:59:10.917080       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1122 00:59:40.919323       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-882305 -n default-k8s-diff-port-882305
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-882305 -n default-k8s-diff-port-882305: exit status 2 (560.029866ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-882305 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (9.46s)
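The post-mortem above is driven by two shell commands: a "minikube status --format={{.APIServer}}" probe and a "kubectl get po" query filtered to non-Running pods. As a minimal Go sketch of that pattern (the run helper and the hard-coded profile name are illustrative assumptions, not the test framework's own code):

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and returns its combined output; an assumed helper,
// standing in for the framework's own Run wrapper.
func run(name string, args ...string) (string, error) {
	out, err := exec.Command(name, args...).CombinedOutput()
	return string(out), err
}

func main() {
	// Profile name taken from the post-mortem log above.
	profile := "default-k8s-diff-port-882305"

	// 1. Ask minikube whether the API server is reported as Running/Paused/Stopped.
	status, err := run("out/minikube-linux-arm64",
		"status", "--format={{.APIServer}}", "-p", profile, "-n", profile)
	fmt.Printf("apiserver: %s (err: %v)\n", status, err)

	// 2. List any pods in the cluster that are not in phase Running.
	pods, err := run("kubectl", "--context", profile, "get", "po",
		"-o=jsonpath={.items[*].metadata.name}", "-A",
		"--field-selector=status.phase!=Running")
	fmt.Printf("non-running pods: %s (err: %v)\n", pods, err)
}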
E1122 01:06:00.473484  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:06:00.479799  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:06:00.491184  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:06:00.512552  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:06:00.553918  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:06:00.635377  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:06:00.797002  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:06:01.118441  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:06:01.760718  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:06:02.908198  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/no-preload-165130/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:06:03.042768  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:06:05.604108  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:06:10.725783  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:06:12.669562  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:06:20.967932  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:06:30.611225  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/no-preload-165130/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:06:30.752847  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/kindnet-163229/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:06:30.759224  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/kindnet-163229/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:06:30.770733  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/kindnet-163229/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:06:30.792189  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/kindnet-163229/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:06:30.833697  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/kindnet-163229/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:06:30.915827  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/kindnet-163229/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:06:31.077357  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/kindnet-163229/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:06:31.398986  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/kindnet-163229/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:06:32.040357  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/kindnet-163229/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:06:33.322318  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/kindnet-163229/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:06:35.884612  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/kindnet-163229/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:06:41.006212  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/kindnet-163229/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:06:41.449302  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/auto-163229/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:06:51.247703  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/kindnet-163229/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    

Test pass (258/328)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 36.24
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 5.68
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.12
18 TestDownloadOnly/v1.34.1/DeleteAll 0.23
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.61
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.09
27 TestAddons/Setup 172.33
31 TestAddons/serial/GCPAuth/Namespaces 0.19
32 TestAddons/serial/GCPAuth/FakeCredentials 10.85
48 TestAddons/StoppedEnableDisable 12.41
49 TestCertOptions 37.71
50 TestCertExpiration 251.93
52 TestForceSystemdFlag 43.63
53 TestForceSystemdEnv 42.12
58 TestErrorSpam/setup 34.89
59 TestErrorSpam/start 0.8
60 TestErrorSpam/status 1.1
61 TestErrorSpam/pause 7.37
62 TestErrorSpam/unpause 5.97
63 TestErrorSpam/stop 1.55
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 79.87
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 29.14
70 TestFunctional/serial/KubeContext 0.07
71 TestFunctional/serial/KubectlGetPods 0.09
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.46
75 TestFunctional/serial/CacheCmd/cache/add_local 1.41
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.9
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.15
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
83 TestFunctional/serial/ExtraConfig 32.52
84 TestFunctional/serial/ComponentHealth 0.1
85 TestFunctional/serial/LogsCmd 1.44
86 TestFunctional/serial/LogsFileCmd 1.47
87 TestFunctional/serial/InvalidService 4.64
89 TestFunctional/parallel/ConfigCmd 0.47
90 TestFunctional/parallel/DashboardCmd 12.95
91 TestFunctional/parallel/DryRun 0.46
92 TestFunctional/parallel/InternationalLanguage 0.2
93 TestFunctional/parallel/StatusCmd 1.03
98 TestFunctional/parallel/AddonsCmd 0.2
99 TestFunctional/parallel/PersistentVolumeClaim 25.62
101 TestFunctional/parallel/SSHCmd 0.75
102 TestFunctional/parallel/CpCmd 2.08
104 TestFunctional/parallel/FileSync 0.37
105 TestFunctional/parallel/CertSync 2.09
109 TestFunctional/parallel/NodeLabels 0.09
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.73
113 TestFunctional/parallel/License 0.34
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.63
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.45
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
127 TestFunctional/parallel/ProfileCmd/profile_list 0.42
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.43
129 TestFunctional/parallel/MountCmd/any-port 7.02
130 TestFunctional/parallel/MountCmd/specific-port 1.76
131 TestFunctional/parallel/MountCmd/VerifyCleanup 1.31
132 TestFunctional/parallel/ServiceCmd/List 0.61
133 TestFunctional/parallel/ServiceCmd/JSONOutput 0.61
137 TestFunctional/parallel/Version/short 0.08
138 TestFunctional/parallel/Version/components 1.34
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
143 TestFunctional/parallel/ImageCommands/ImageBuild 4.01
144 TestFunctional/parallel/ImageCommands/Setup 0.63
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
152 TestFunctional/parallel/UpdateContextCmd/no_changes 0.18
153 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.16
154 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 208.87
163 TestMultiControlPlane/serial/DeployApp 6.95
164 TestMultiControlPlane/serial/PingHostFromPods 1.47
165 TestMultiControlPlane/serial/AddWorkerNode 59.46
166 TestMultiControlPlane/serial/NodeLabels 0.12
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.09
168 TestMultiControlPlane/serial/CopyFile 19.45
169 TestMultiControlPlane/serial/StopSecondaryNode 12.78
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.8
171 TestMultiControlPlane/serial/RestartSecondaryNode 27.32
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.22
176 TestMultiControlPlane/serial/StopCluster 24.44
177 TestMultiControlPlane/serial/RestartCluster 68.73
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.76
179 TestMultiControlPlane/serial/AddSecondaryNode 55.8
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.05
185 TestJSONOutput/start/Command 75.87
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.79
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.23
210 TestKicCustomNetwork/create_custom_network 41.31
211 TestKicCustomNetwork/use_default_bridge_network 36.38
212 TestKicExistingNetwork 36.2
213 TestKicCustomSubnet 37.83
214 TestKicStaticIP 38.45
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 69.52
219 TestMountStart/serial/StartWithMountFirst 8.71
220 TestMountStart/serial/VerifyMountFirst 0.26
221 TestMountStart/serial/StartWithMountSecond 9.01
222 TestMountStart/serial/VerifyMountSecond 0.28
223 TestMountStart/serial/DeleteFirst 1.72
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.3
226 TestMountStart/serial/RestartStopped 7.99
227 TestMountStart/serial/VerifyMountPostStop 0.27
230 TestMultiNode/serial/FreshStart2Nodes 136.19
231 TestMultiNode/serial/DeployApp2Nodes 5.15
232 TestMultiNode/serial/PingHostFrom2Pods 0.89
233 TestMultiNode/serial/AddNode 58.53
234 TestMultiNode/serial/MultiNodeLabels 0.13
235 TestMultiNode/serial/ProfileList 0.72
236 TestMultiNode/serial/CopyFile 10.42
237 TestMultiNode/serial/StopNode 2.41
238 TestMultiNode/serial/StartAfterStop 8.38
239 TestMultiNode/serial/RestartKeepsNodes 76.54
240 TestMultiNode/serial/DeleteNode 5.66
241 TestMultiNode/serial/StopMultiNode 24.07
242 TestMultiNode/serial/RestartMultiNode 54.54
243 TestMultiNode/serial/ValidateNameConflict 37.54
248 TestPreload 150.31
250 TestScheduledStopUnix 107.95
253 TestInsufficientStorage 13.25
254 TestRunningBinaryUpgrade 62.23
256 TestKubernetesUpgrade 349.35
257 TestMissingContainerUpgrade 134.74
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
260 TestNoKubernetes/serial/StartWithK8s 40.33
261 TestNoKubernetes/serial/StartWithStopK8s 10.04
262 TestNoKubernetes/serial/Start 9.16
263 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
264 TestNoKubernetes/serial/VerifyK8sNotRunning 0.35
265 TestNoKubernetes/serial/ProfileList 1.13
266 TestNoKubernetes/serial/Stop 1.43
267 TestNoKubernetes/serial/StartNoArgs 7.73
268 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.43
269 TestStoppedBinaryUpgrade/Setup 8.07
270 TestStoppedBinaryUpgrade/Upgrade 55.79
271 TestStoppedBinaryUpgrade/MinikubeLogs 1.22
280 TestPause/serial/Start 50.36
281 TestPause/serial/SecondStartNoReconfiguration 25.76
290 TestNetworkPlugins/group/false 4.21
295 TestStartStop/group/old-k8s-version/serial/FirstStart 60.01
296 TestStartStop/group/old-k8s-version/serial/DeployApp 9.43
298 TestStartStop/group/old-k8s-version/serial/Stop 12.01
299 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
300 TestStartStop/group/old-k8s-version/serial/SecondStart 51.91
301 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
302 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.12
303 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
306 TestStartStop/group/no-preload/serial/FirstStart 80.13
308 TestStartStop/group/embed-certs/serial/FirstStart 86.02
309 TestStartStop/group/no-preload/serial/DeployApp 8.32
311 TestStartStop/group/no-preload/serial/Stop 12.01
312 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
313 TestStartStop/group/no-preload/serial/SecondStart 47.32
314 TestStartStop/group/embed-certs/serial/DeployApp 8.47
316 TestStartStop/group/embed-certs/serial/Stop 12.51
317 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
318 TestStartStop/group/embed-certs/serial/SecondStart 55.38
319 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
320 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.14
321 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.28
324 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 52.29
325 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
326 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
327 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.28
330 TestStartStop/group/newest-cni/serial/FirstStart 37.83
331 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.41
333 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.25
334 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
335 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 52.3
336 TestStartStop/group/newest-cni/serial/DeployApp 0
338 TestStartStop/group/newest-cni/serial/Stop 1.51
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.27
340 TestStartStop/group/newest-cni/serial/SecondStart 21.79
341 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
343 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.35
345 TestNetworkPlugins/group/auto/Start 85.4
346 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
347 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.14
348 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.28
350 TestNetworkPlugins/group/kindnet/Start 79.83
351 TestNetworkPlugins/group/auto/KubeletFlags 0.32
352 TestNetworkPlugins/group/auto/NetCatPod 11.61
353 TestNetworkPlugins/group/auto/DNS 0.18
354 TestNetworkPlugins/group/auto/Localhost 0.13
355 TestNetworkPlugins/group/auto/HairPin 0.14
356 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
357 TestNetworkPlugins/group/calico/Start 88.87
358 TestNetworkPlugins/group/kindnet/KubeletFlags 0.38
359 TestNetworkPlugins/group/kindnet/NetCatPod 11.47
360 TestNetworkPlugins/group/kindnet/DNS 0.34
361 TestNetworkPlugins/group/kindnet/Localhost 0.17
362 TestNetworkPlugins/group/kindnet/HairPin 0.18
363 TestNetworkPlugins/group/custom-flannel/Start 65.8
364 TestNetworkPlugins/group/calico/ControllerPod 6.01
365 TestNetworkPlugins/group/calico/KubeletFlags 0.32
366 TestNetworkPlugins/group/calico/NetCatPod 10.26
367 TestNetworkPlugins/group/calico/DNS 0.16
368 TestNetworkPlugins/group/calico/Localhost 0.15
369 TestNetworkPlugins/group/calico/HairPin 0.14
370 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
371 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.28
372 TestNetworkPlugins/group/custom-flannel/DNS 0.21
373 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
374 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
375 TestNetworkPlugins/group/enable-default-cni/Start 81.27
376 TestNetworkPlugins/group/flannel/Start 60.11
377 TestNetworkPlugins/group/flannel/ControllerPod 6.01
378 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
379 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.29
380 TestNetworkPlugins/group/flannel/KubeletFlags 0.42
381 TestNetworkPlugins/group/flannel/NetCatPod 11.39
382 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
383 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
384 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
385 TestNetworkPlugins/group/flannel/DNS 0.23
386 TestNetworkPlugins/group/flannel/Localhost 0.18
387 TestNetworkPlugins/group/flannel/HairPin 0.15
388 TestNetworkPlugins/group/bridge/Start 73.06
389 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
390 TestNetworkPlugins/group/bridge/NetCatPod 9.26
391 TestNetworkPlugins/group/bridge/DNS 0.16
392 TestNetworkPlugins/group/bridge/Localhost 0.13
393 TestNetworkPlugins/group/bridge/HairPin 0.12
TestDownloadOnly/v1.28.0/json-events (36.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-255607 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-255607 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (36.237914469s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (36.24s)
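For context, the download-only mode exercised here can be reproduced by hand; a minimal sketch of the same command pattern (the profile name download-demo is illustrative, not taken from this run):

    # Pre-fetch the kic base image and the v1.28.0 preload without creating a cluster
    minikube start -o=json --download-only -p download-demo \
      --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker
    # Remove the placeholder profile when done
    minikube delete -p download-demo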

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1121 23:47:39.046644  516937 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1121 23:47:39.046724  516937 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
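The preload-exists assertion only checks that the tarball is present in the local cache; an equivalent manual check (the cache root here is this job's MINIKUBE_HOME, ~/.minikube on a default install):

    ls -lh ~/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4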

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-255607
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-255607: exit status 85 (82.335716ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-255607 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-255607 │ jenkins │ v1.37.0 │ 21 Nov 25 23:47 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 23:47:02
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 23:47:02.850853  516942 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:47:02.851088  516942 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:47:02.851116  516942 out.go:374] Setting ErrFile to fd 2...
	I1121 23:47:02.851135  516942 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:47:02.851435  516942 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	W1121 23:47:02.851631  516942 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21934-513600/.minikube/config/config.json: open /home/jenkins/minikube-integration/21934-513600/.minikube/config/config.json: no such file or directory
	I1121 23:47:02.852157  516942 out.go:368] Setting JSON to true
	I1121 23:47:02.853091  516942 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":16139,"bootTime":1763752684,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1121 23:47:02.853187  516942 start.go:143] virtualization:  
	I1121 23:47:02.858484  516942 out.go:99] [download-only-255607] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1121 23:47:02.858701  516942 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball: no such file or directory
	I1121 23:47:02.858830  516942 notify.go:221] Checking for updates...
	I1121 23:47:02.862824  516942 out.go:171] MINIKUBE_LOCATION=21934
	I1121 23:47:02.866038  516942 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 23:47:02.869195  516942 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1121 23:47:02.872334  516942 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-513600/.minikube
	I1121 23:47:02.875496  516942 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1121 23:47:02.881368  516942 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1121 23:47:02.881629  516942 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 23:47:02.908316  516942 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1121 23:47:02.908424  516942 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 23:47:02.963998  516942 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-21 23:47:02.9548062 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 23:47:02.964104  516942 docker.go:319] overlay module found
	I1121 23:47:02.967310  516942 out.go:99] Using the docker driver based on user configuration
	I1121 23:47:02.967369  516942 start.go:309] selected driver: docker
	I1121 23:47:02.967381  516942 start.go:930] validating driver "docker" against <nil>
	I1121 23:47:02.967478  516942 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 23:47:03.031297  516942 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-21 23:47:03.021834887 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 23:47:03.031466  516942 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1121 23:47:03.031765  516942 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1121 23:47:03.031917  516942 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1121 23:47:03.035115  516942 out.go:171] Using Docker driver with root privileges
	I1121 23:47:03.038144  516942 cni.go:84] Creating CNI manager for ""
	I1121 23:47:03.038217  516942 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 23:47:03.038233  516942 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1121 23:47:03.038308  516942 start.go:353] cluster config:
	{Name:download-only-255607 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-255607 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 23:47:03.041276  516942 out.go:99] Starting "download-only-255607" primary control-plane node in "download-only-255607" cluster
	I1121 23:47:03.041304  516942 cache.go:134] Beginning downloading kic base image for docker with crio
	I1121 23:47:03.044160  516942 out.go:99] Pulling base image v0.0.48-1763588073-21934 ...
	I1121 23:47:03.044215  516942 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1121 23:47:03.044376  516942 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1121 23:47:03.058972  516942 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e to local cache
	I1121 23:47:03.059178  516942 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local cache directory
	I1121 23:47:03.059283  516942 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e to local cache
	I1121 23:47:03.102177  516942 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1121 23:47:03.102207  516942 cache.go:65] Caching tarball of preloaded images
	I1121 23:47:03.102380  516942 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1121 23:47:03.105750  516942 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1121 23:47:03.105777  516942 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1121 23:47:03.199894  516942 preload.go:295] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1121 23:47:03.200037  516942 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1121 23:47:08.096365  516942 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e as a tarball
	
	
	* The control-plane node download-only-255607 host does not exist
	  To start a cluster, run: "minikube start -p download-only-255607"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-255607
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (5.68s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-454799 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-454799 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.680472293s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (5.68s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1121 23:47:45.166330  516937 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1121 23:47:45.166377  516937 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-454799
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-454799: exit status 85 (120.362951ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-255607 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-255607 │ jenkins │ v1.37.0 │ 21 Nov 25 23:47 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 21 Nov 25 23:47 UTC │ 21 Nov 25 23:47 UTC │
	│ delete  │ -p download-only-255607                                                                                                                                                   │ download-only-255607 │ jenkins │ v1.37.0 │ 21 Nov 25 23:47 UTC │ 21 Nov 25 23:47 UTC │
	│ start   │ -o=json --download-only -p download-only-454799 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-454799 │ jenkins │ v1.37.0 │ 21 Nov 25 23:47 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 23:47:39
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 23:47:39.530378  517142 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:47:39.530533  517142 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:47:39.530545  517142 out.go:374] Setting ErrFile to fd 2...
	I1121 23:47:39.530551  517142 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:47:39.530794  517142 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1121 23:47:39.531208  517142 out.go:368] Setting JSON to true
	I1121 23:47:39.532045  517142 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":16176,"bootTime":1763752684,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1121 23:47:39.532115  517142 start.go:143] virtualization:  
	I1121 23:47:39.535279  517142 out.go:99] [download-only-454799] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1121 23:47:39.535486  517142 notify.go:221] Checking for updates...
	I1121 23:47:39.538272  517142 out.go:171] MINIKUBE_LOCATION=21934
	I1121 23:47:39.541187  517142 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 23:47:39.544050  517142 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1121 23:47:39.546952  517142 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-513600/.minikube
	I1121 23:47:39.549772  517142 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1121 23:47:39.555621  517142 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1121 23:47:39.555875  517142 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 23:47:39.590718  517142 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1121 23:47:39.590871  517142 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 23:47:39.648038  517142 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-11-21 23:47:39.638238155 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 23:47:39.648145  517142 docker.go:319] overlay module found
	I1121 23:47:39.651096  517142 out.go:99] Using the docker driver based on user configuration
	I1121 23:47:39.651136  517142 start.go:309] selected driver: docker
	I1121 23:47:39.651143  517142 start.go:930] validating driver "docker" against <nil>
	I1121 23:47:39.651238  517142 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 23:47:39.714178  517142 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-11-21 23:47:39.705448637 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 23:47:39.714332  517142 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1121 23:47:39.714622  517142 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1121 23:47:39.714785  517142 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1121 23:47:39.717907  517142 out.go:171] Using Docker driver with root privileges
	I1121 23:47:39.720656  517142 cni.go:84] Creating CNI manager for ""
	I1121 23:47:39.720742  517142 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1121 23:47:39.720755  517142 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1121 23:47:39.720830  517142 start.go:353] cluster config:
	{Name:download-only-454799 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-454799 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 23:47:39.723715  517142 out.go:99] Starting "download-only-454799" primary control-plane node in "download-only-454799" cluster
	I1121 23:47:39.723747  517142 cache.go:134] Beginning downloading kic base image for docker with crio
	I1121 23:47:39.726600  517142 out.go:99] Pulling base image v0.0.48-1763588073-21934 ...
	I1121 23:47:39.726652  517142 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 23:47:39.726732  517142 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1121 23:47:39.742988  517142 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e to local cache
	I1121 23:47:39.743132  517142 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local cache directory
	I1121 23:47:39.743165  517142 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local cache directory, skipping pull
	I1121 23:47:39.743173  517142 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in cache, skipping pull
	I1121 23:47:39.743180  517142 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e as a tarball
	I1121 23:47:39.804825  517142 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1121 23:47:39.804855  517142 cache.go:65] Caching tarball of preloaded images
	I1121 23:47:39.805048  517142 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 23:47:39.808216  517142 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1121 23:47:39.808248  517142 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1121 23:47:39.889028  517142 preload.go:295] Got checksum from GCS API "bc3e4aa50814345ef9ba3452bb5efb9f"
	I1121 23:47:39.889083  517142 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:bc3e4aa50814345ef9ba3452bb5efb9f -> /home/jenkins/minikube-integration/21934-513600/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1121 23:47:44.435659  517142 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1121 23:47:44.436065  517142 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/download-only-454799/config.json ...
	I1121 23:47:44.436100  517142 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/download-only-454799/config.json: {Name:mk6b9111c2ca731cc2a4cd685281500a63759877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:47:44.436288  517142 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1121 23:47:44.436456  517142 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21934-513600/.minikube/cache/linux/arm64/v1.34.1/kubectl
	
	
	* The control-plane node download-only-454799 host does not exist
	  To start a cluster, run: "minikube start -p download-only-454799"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.12s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-454799
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.61s)

                                                
                                                
=== RUN   TestBinaryMirror
I1121 23:47:46.396777  516937 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-343381 --alsologtostderr --binary-mirror http://127.0.0.1:44455 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-343381" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-343381
--- PASS: TestBinaryMirror (0.61s)
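This test points the kubectl/kubelet/kubeadm binary downloads at a local HTTP endpoint instead of the default dl.k8s.io location; a hedged sketch of the same invocation (the mirror URL and profile name are illustrative, and something must already be serving the release binaries on that port):

    minikube start --download-only -p binary-mirror-demo \
      --binary-mirror http://127.0.0.1:44455 --driver=docker --container-runtime=crio
    minikube delete -p binary-mirror-demo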

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-882841
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-882841: exit status 85 (79.552805ms)

                                                
                                                
-- stdout --
	* Profile "addons-882841" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-882841"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-882841
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-882841: exit status 85 (89.472338ms)

                                                
                                                
-- stdout --
	* Profile "addons-882841" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-882841"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

                                                
                                    
TestAddons/Setup (172.33s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-882841 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-882841 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m52.332339211s)
--- PASS: TestAddons/Setup (172.33s)
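Setup enables the full addon matrix in a single start; the same pattern with a smaller, hedged subset (profile name addons-demo is illustrative, not from this run):

    minikube start -p addons-demo --wait=true --memory=4096 --driver=docker --container-runtime=crio \
      --addons=registry --addons=metrics-server --addons=ingress
    minikube addons list -p addons-demo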

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-882841 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-882841 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (10.85s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-882841 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-882841 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [751da731-92a8-4cfb-afe7-538e4c656999] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [751da731-92a8-4cfb-afe7-538e4c656999] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.003313777s
addons_test.go:694: (dbg) Run:  kubectl --context addons-882841 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-882841 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-882841 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-882841 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.85s)
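The assertion here is that the gcp-auth webhook injects fake credential variables into newly created pods; a hedged sketch of the same check (the context name addons-demo is illustrative; busybox comes from testdata/busybox.yaml):

    kubectl --context addons-demo create -f testdata/busybox.yaml
    kubectl --context addons-demo exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
    kubectl --context addons-demo exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"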

                                                
                                    
TestAddons/StoppedEnableDisable (12.41s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-882841
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-882841: (12.113047732s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-882841
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-882841
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-882841
--- PASS: TestAddons/StoppedEnableDisable (12.41s)
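The point of this test is that addons can still be toggled while the cluster is stopped; a hedged sketch of the same sequence (profile name illustrative):

    minikube stop -p addons-demo
    minikube addons enable dashboard -p addons-demo
    minikube addons disable dashboard -p addons-demo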

                                                
                                    
TestCertOptions (37.71s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-002126 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-002126 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (34.883888853s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-002126 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-002126 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-002126 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-002126" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-002126
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-002126: (2.098803945s)
--- PASS: TestCertOptions (37.71s)
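The test asserts that the requested IPs, SANs and the alternate API server port all end up in the generated apiserver certificate; a hedged sketch of the same verification (profile name and the grep filter are illustrative):

    minikube start -p cert-demo --memory=3072 --driver=docker --container-runtime=crio \
      --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
      --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555
    minikube -p cert-demo ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 "Subject Alternative Name"
    minikube delete -p cert-demo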

                                                
                                    
TestCertExpiration (251.93s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-621390 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-621390 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (35.427908614s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-621390 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-621390 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (33.387078335s)
helpers_test.go:175: Cleaning up "cert-expiration-621390" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-621390
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-621390: (3.113748909s)
--- PASS: TestCertExpiration (251.93s)
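This test starts a cluster with a 3-minute certificate lifetime, waits past it, then restarts with a one-year lifetime to confirm the cluster still comes up after the short-lived certificates have expired; a hedged sketch (profile name illustrative):

    minikube start -p cert-exp-demo --memory=3072 --cert-expiration=3m --driver=docker --container-runtime=crio
    # wait out the 3-minute window, then restart with a longer lifetime
    minikube start -p cert-exp-demo --memory=3072 --cert-expiration=8760h --driver=docker --container-runtime=crio
    minikube delete -p cert-exp-demo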

                                                
                                    
TestForceSystemdFlag (43.63s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-967086 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-967086 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (40.37928323s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-967086 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-967086" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-967086
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-967086: (2.835287416s)
--- PASS: TestForceSystemdFlag (43.63s)
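--force-systemd makes the container runtime use the systemd cgroup manager, and the test verifies this by reading the CRI-O drop-in that minikube writes; a hedged sketch (profile name illustrative):

    minikube start -p systemd-demo --memory=3072 --force-systemd --driver=docker --container-runtime=crio
    minikube -p systemd-demo ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
    minikube delete -p systemd-demo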

                                                
                                    
TestForceSystemdEnv (42.12s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-634519 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-634519 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (39.059625627s)
helpers_test.go:175: Cleaning up "force-systemd-env-634519" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-634519
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-634519: (3.062439555s)
--- PASS: TestForceSystemdEnv (42.12s)
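The env variant covers the same behaviour driven by environment rather than a flag; a hedged sketch, assuming the variable is MINIKUBE_FORCE_SYSTEMD (the variable name does not appear in this log):

    # Assumption: MINIKUBE_FORCE_SYSTEMD=true has the same effect as --force-systemd
    MINIKUBE_FORCE_SYSTEMD=true minikube start -p systemd-env-demo --memory=3072 \
      --driver=docker --container-runtime=crio
    minikube delete -p systemd-env-demo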

                                                
                                    
TestErrorSpam/setup (34.89s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-774540 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-774540 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-774540 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-774540 --driver=docker  --container-runtime=crio: (34.89338509s)
--- PASS: TestErrorSpam/setup (34.89s)

                                                
                                    
TestErrorSpam/start (0.8s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-774540 --log_dir /tmp/nospam-774540 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-774540 --log_dir /tmp/nospam-774540 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-774540 --log_dir /tmp/nospam-774540 start --dry-run
--- PASS: TestErrorSpam/start (0.80s)

                                                
                                    
TestErrorSpam/status (1.1s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-774540 --log_dir /tmp/nospam-774540 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-774540 --log_dir /tmp/nospam-774540 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-774540 --log_dir /tmp/nospam-774540 status
--- PASS: TestErrorSpam/status (1.10s)

                                                
                                    
TestErrorSpam/pause (7.37s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-774540 --log_dir /tmp/nospam-774540 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-774540 --log_dir /tmp/nospam-774540 pause: exit status 80 (2.460791604s)

                                                
                                                
-- stdout --
	* Pausing node nospam-774540 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:54:50Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-774540 --log_dir /tmp/nospam-774540 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-774540 --log_dir /tmp/nospam-774540 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-774540 --log_dir /tmp/nospam-774540 pause: exit status 80 (2.508674179s)

                                                
                                                
-- stdout --
	* Pausing node nospam-774540 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:54:52Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-774540 --log_dir /tmp/nospam-774540 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-774540 --log_dir /tmp/nospam-774540 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-774540 --log_dir /tmp/nospam-774540 pause: exit status 80 (2.399378246s)

                                                
                                                
-- stdout --
	* Pausing node nospam-774540 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:54:54Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-774540 --log_dir /tmp/nospam-774540 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (7.37s)
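
All three pause attempts above fail in the same way: the guest-side probe `sudo runc list -f json` cannot open /run/runc. Below is a minimal standalone sketch (not part of the test suite) for re-running that probe against an existing profile; it assumes a `minikube` binary on PATH and reuses the profile name from this run, whereas the test itself drives out/minikube-linux-arm64.

	// runc_probe.go: a standalone sketch of the guest-side probe that fails above.
	// Assumptions: a `minikube` binary on PATH and a running profile named below.
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func run(args ...string) {
		out, err := exec.Command("minikube", args...).CombinedOutput()
		fmt.Printf("$ minikube %v\n%s(err=%v)\n\n", args, out, err)
	}
	
	func main() {
		const profile = "nospam-774540" // assumed: the profile name taken from the log above
	
		// Does the runc state directory exist inside the guest?
		run("-p", profile, "ssh", "--", "ls -ld /run/runc || true")
	
		// The same listing command that the pause failure above reports running.
		run("-p", profile, "ssh", "--", "sudo runc list -f json")
	}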

                                                
                                    
x
+
TestErrorSpam/unpause (5.97s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-774540 --log_dir /tmp/nospam-774540 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-774540 --log_dir /tmp/nospam-774540 unpause: exit status 80 (2.20882717s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-774540 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:54:57Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-774540 --log_dir /tmp/nospam-774540 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-774540 --log_dir /tmp/nospam-774540 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-774540 --log_dir /tmp/nospam-774540 unpause: exit status 80 (2.044767408s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-774540 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:54:59Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-774540 --log_dir /tmp/nospam-774540 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-774540 --log_dir /tmp/nospam-774540 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-774540 --log_dir /tmp/nospam-774540 unpause: exit status 80 (1.71718463s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-774540 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-21T23:55:00Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-774540 --log_dir /tmp/nospam-774540 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.97s)

                                                
                                    
x
+
TestErrorSpam/stop (1.55s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-774540 --log_dir /tmp/nospam-774540 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-774540 --log_dir /tmp/nospam-774540 stop: (1.339363721s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-774540 --log_dir /tmp/nospam-774540 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-774540 --log_dir /tmp/nospam-774540 stop
--- PASS: TestErrorSpam/stop (1.55s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21934-513600/.minikube/files/etc/test/nested/copy/516937/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (79.87s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-354825 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1121 23:55:40.217875  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:55:40.224263  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:55:40.235618  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:55:40.257013  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:55:40.298333  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:55:40.379684  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:55:40.541197  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:55:40.862829  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:55:41.504819  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:55:42.786438  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:55:45.348619  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:55:50.470070  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:56:00.712274  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:56:21.193955  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-354825 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m19.866896053s)
--- PASS: TestFunctional/serial/StartWithProxy (79.87s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (29.14s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1121 23:56:26.790649  516937 config.go:182] Loaded profile config "functional-354825": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-354825 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-354825 --alsologtostderr -v=8: (29.142384008s)
functional_test.go:678: soft start took 29.142892753s for "functional-354825" cluster.
I1121 23:56:55.933312  516937 config.go:182] Loaded profile config "functional-354825": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (29.14s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-354825 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.46s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-354825 cache add registry.k8s.io/pause:3.1: (1.162598634s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-354825 cache add registry.k8s.io/pause:3.3: (1.169310626s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-354825 cache add registry.k8s.io/pause:latest: (1.12557001s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.46s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.41s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-354825 /tmp/TestFunctionalserialCacheCmdcacheadd_local3302152861/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 cache add minikube-local-cache-test:functional-354825
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 cache delete minikube-local-cache-test:functional-354825
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-354825
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.41s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.9s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-354825 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (321.388894ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 cache reload
E1121 23:57:02.156189  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.90s)
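
The cache_reload block above removes pause:latest from the node, confirms `crictl inspecti` now fails, runs `minikube cache reload`, and confirms the image is back. A standalone sketch of the same round trip follows; it assumes a `minikube` binary on PATH, the profile name from this run, and that the image is already in the local cache.

	// cache_reload_sketch.go: mirrors the remove / verify-missing / reload / verify-present
	// steps exercised by TestFunctional/serial/CacheCmd/cache/cache_reload above.
	// Assumptions: `minikube` on PATH, profile "functional-354825" running, image already cached.
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func mk(args ...string) error {
		cmd := exec.Command("minikube", append([]string{"-p", "functional-354825"}, args...)...)
		out, err := cmd.CombinedOutput()
		fmt.Printf("$ minikube -p functional-354825 %v\n%s\n", args, out)
		return err
	}
	
	func main() {
		const img = "registry.k8s.io/pause:latest"
	
		_ = mk("ssh", "sudo crictl rmi "+img) // remove the image from the node
		if err := mk("ssh", "sudo crictl inspecti "+img); err == nil {
			fmt.Println("unexpected: image still present after rmi")
		}
		_ = mk("cache", "reload") // push cached images back onto the node
		if err := mk("ssh", "sudo crictl inspecti "+img); err != nil {
			fmt.Println("reload did not restore the image:", err)
		}
	}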

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 kubectl -- --context functional-354825 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-354825 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (32.52s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-354825 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-354825 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.515241473s)
functional_test.go:776: restart took 32.515356916s for "functional-354825" cluster.
I1121 23:57:36.223572  516937 config.go:182] Loaded profile config "functional-354825": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (32.52s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-354825 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.44s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-354825 logs: (1.437697065s)
--- PASS: TestFunctional/serial/LogsCmd (1.44s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.47s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 logs --file /tmp/TestFunctionalserialLogsFileCmd856260409/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-354825 logs --file /tmp/TestFunctionalserialLogsFileCmd856260409/001/logs.txt: (1.464494024s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.47s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.64s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-354825 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-354825
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-354825: exit status 115 (375.122715ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32616 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-354825 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-354825 delete -f testdata/invalidsvc.yaml: (1.014672893s)
--- PASS: TestFunctional/serial/InvalidService (4.64s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-354825 config get cpus: exit status 14 (65.114395ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-354825 config get cpus: exit status 14 (93.085575ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)
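
The ConfigCmd block relies on `config get` exiting with status 14 once a key has been unset. A small sketch of reading that exit code from Go, under the same assumptions about the binary and profile as above:

	// config_roundtrip.go: set, read, unset, and re-read a config key, checking for the
	// "not found" exit status (14) seen in TestFunctional/parallel/ConfigCmd above.
	// Assumptions: `minikube` on PATH and a profile named "functional-354825".
	package main
	
	import (
		"errors"
		"fmt"
		"os/exec"
	)
	
	func config(args ...string) (string, int) {
		cmd := exec.Command("minikube", append([]string{"-p", "functional-354825", "config"}, args...)...)
		out, err := cmd.CombinedOutput()
		code := 0
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			code = exitErr.ExitCode()
		}
		return string(out), code
	}
	
	func main() {
		fmt.Println(config("set", "cpus", "2"))
		fmt.Println(config("get", "cpus")) // expect "2", exit 0
		fmt.Println(config("unset", "cpus"))
		out, code := config("get", "cpus") // expect exit status 14 once the key is gone
		fmt.Printf("after unset: %q exit=%d\n", out, code)
	}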

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (12.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-354825 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-354825 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 542881: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.95s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-354825 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-354825 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (205.064782ms)

                                                
                                                
-- stdout --
	* [functional-354825] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21934
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21934-513600/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-513600/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1122 00:08:10.039579  542407 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:08:10.039790  542407 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:08:10.039824  542407 out.go:374] Setting ErrFile to fd 2...
	I1122 00:08:10.039844  542407 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:08:10.040161  542407 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1122 00:08:10.040601  542407 out.go:368] Setting JSON to false
	I1122 00:08:10.041570  542407 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":17406,"bootTime":1763752684,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1122 00:08:10.041700  542407 start.go:143] virtualization:  
	I1122 00:08:10.044887  542407 out.go:179] * [functional-354825] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1122 00:08:10.048645  542407 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:08:10.048729  542407 notify.go:221] Checking for updates...
	I1122 00:08:10.054727  542407 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:08:10.057766  542407 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1122 00:08:10.060887  542407 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-513600/.minikube
	I1122 00:08:10.063902  542407 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1122 00:08:10.066737  542407 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:08:10.070161  542407 config.go:182] Loaded profile config "functional-354825": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:08:10.070766  542407 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:08:10.101948  542407 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1122 00:08:10.102943  542407 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:08:10.164833  542407 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-22 00:08:10.154690274 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:08:10.164940  542407 docker.go:319] overlay module found
	I1122 00:08:10.168016  542407 out.go:179] * Using the docker driver based on existing profile
	I1122 00:08:10.170813  542407 start.go:309] selected driver: docker
	I1122 00:08:10.170834  542407 start.go:930] validating driver "docker" against &{Name:functional-354825 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-354825 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:08:10.170938  542407 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:08:10.174550  542407 out.go:203] 
	W1122 00:08:10.177479  542407 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1122 00:08:10.180470  542407 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-354825 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.46s)
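
The dry-run above is rejected with RSRC_INSUFFICIENT_REQ_MEMORY because 250MiB is below the 1800MB floor quoted in the error text. A purely local sketch of that pre-flight comparison is below; the 1800MB figure comes from the message above, and the parsing helper is hypothetical (not minikube's own parser).

	// memcheck.go: a local pre-flight sketch of the memory floor that the dry-run above trips over.
	// The 1800MB minimum is taken from the RSRC_INSUFFICIENT_REQ_MEMORY message; parseMB is an
	// illustrative helper only.
	package main
	
	import (
		"fmt"
		"strconv"
		"strings"
	)
	
	// parseMB converts inputs like "250MB" or "4g" to megabytes. Hypothetical helper.
	func parseMB(s string) (int, error) {
		s = strings.ToLower(strings.TrimSpace(s))
		mult := 1
		switch {
		case strings.HasSuffix(s, "gb"), strings.HasSuffix(s, "g"):
			mult = 1024
			s = strings.TrimRight(s, "gb")
		case strings.HasSuffix(s, "mb"), strings.HasSuffix(s, "m"):
			s = strings.TrimRight(s, "mb")
		}
		n, err := strconv.Atoi(s)
		return n * mult, err
	}
	
	func main() {
		const minMB = 1800 // floor quoted in the error message above
		for _, req := range []string{"250MB", "4096MB"} {
			mb, err := parseMB(req)
			fmt.Printf("%s -> %dMB, err=%v, ok=%v\n", req, mb, err, err == nil && mb >= minMB)
		}
	}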

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-354825 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-354825 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (201.983237ms)

                                                
                                                
-- stdout --
	* [functional-354825] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21934
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21934-513600/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-513600/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1122 00:08:09.834480  542359 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:08:09.834709  542359 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:08:09.834743  542359 out.go:374] Setting ErrFile to fd 2...
	I1122 00:08:09.834764  542359 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:08:09.835152  542359 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1122 00:08:09.835588  542359 out.go:368] Setting JSON to false
	I1122 00:08:09.836504  542359 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":17406,"bootTime":1763752684,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1122 00:08:09.836606  542359 start.go:143] virtualization:  
	I1122 00:08:09.840217  542359 out.go:179] * [functional-354825] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1122 00:08:09.843950  542359 notify.go:221] Checking for updates...
	I1122 00:08:09.847040  542359 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:08:09.850222  542359 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:08:09.853035  542359 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1122 00:08:09.856002  542359 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-513600/.minikube
	I1122 00:08:09.859039  542359 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1122 00:08:09.861964  542359 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:08:09.865302  542359 config.go:182] Loaded profile config "functional-354825": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:08:09.866064  542359 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:08:09.900099  542359 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1122 00:08:09.900235  542359 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:08:09.959380  542359 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-22 00:08:09.950104584 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:08:09.959483  542359 docker.go:319] overlay module found
	I1122 00:08:09.962538  542359 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1122 00:08:09.965431  542359 start.go:309] selected driver: docker
	I1122 00:08:09.965448  542359 start.go:930] validating driver "docker" against &{Name:functional-354825 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-354825 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:08:09.965552  542359 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:08:09.969289  542359 out.go:203] 
	W1122 00:08:09.972049  542359 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1122 00:08:09.974891  542359 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.03s)
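
The status checks above use both a Go-template format string and `-o json`. A sketch that decodes the JSON form into a struct follows; the field names are assumed from the template keys in the command above (Host, Kubelet, APIServer, Kubeconfig), not read from minikube's source, and `minikube` on PATH is assumed.

	// status_json.go: decodes `minikube status -o json` using the field names referenced
	// by the Go template above. Field set is an assumption based on that template.
	package main
	
	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)
	
	type status struct {
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
	}
	
	func main() {
		out, err := exec.Command("minikube", "-p", "functional-354825", "status", "-o", "json").Output()
		if err != nil {
			fmt.Println("status returned:", err) // a non-zero exit can also signal stopped components
		}
		var st status
		if jsonErr := json.Unmarshal(out, &st); jsonErr != nil {
			fmt.Println("decode failed:", jsonErr)
			return
		}
		fmt.Printf("%+v\n", st)
	}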

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (25.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [769d2586-0909-4bce-b8e3-c4f125035c4a] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004141851s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-354825 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-354825 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-354825 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-354825 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [d6ccfb2b-9e82-4221-b5f7-8bc132fb4ebb] Pending
helpers_test.go:352: "sp-pod" [d6ccfb2b-9e82-4221-b5f7-8bc132fb4ebb] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [d6ccfb2b-9e82-4221-b5f7-8bc132fb4ebb] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003344404s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-354825 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-354825 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-354825 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [d37fab07-c2ab-4d89-b2a7-c06a4f25f061] Pending
helpers_test.go:352: "sp-pod" [d37fab07-c2ab-4d89-b2a7-c06a4f25f061] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.009735341s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-354825 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.62s)
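
The PVC block proves persistence by touching /tmp/mount/foo in the first sp-pod, deleting that pod, recreating it from the same manifest, and listing the mount again. A standalone kubectl-driven sketch of the same check is below; it assumes kubectl on PATH, the "functional-354825" context, and local copies of the testdata/storage-provisioner manifests referenced above.

	// pvc_persistence.go: re-runs the touch / delete pod / recreate / ls persistence check
	// from TestFunctional/parallel/PersistentVolumeClaim above.
	// Assumptions: kubectl on PATH, context "functional-354825", manifests available locally.
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func kubectl(args ...string) (string, error) {
		out, err := exec.Command("kubectl", append([]string{"--context", "functional-354825"}, args...)...).CombinedOutput()
		return string(out), err
	}
	
	func main() {
		steps := [][]string{
			{"apply", "-f", "testdata/storage-provisioner/pvc.yaml"},
			{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},
			{"wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=240s"},
			{"exec", "sp-pod", "--", "touch", "/tmp/mount/foo"},
			{"delete", "-f", "testdata/storage-provisioner/pod.yaml"},
			{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},
			{"wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=240s"},
			{"exec", "sp-pod", "--", "ls", "/tmp/mount"}, // should still list "foo"
		}
		for _, s := range steps {
			out, err := kubectl(s...)
			fmt.Printf("kubectl %v\n%s(err=%v)\n", s, out, err)
		}
	}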

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.75s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 ssh -n functional-354825 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 cp functional-354825:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2030449897/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 ssh -n functional-354825 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 ssh -n functional-354825 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.08s)
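
The copy test exercises minikube cp in both directions; a minimal sketch of the same flow using the paths from the log (the host-side destination is any writable path):
	# host -> node, then read it back over ssh to confirm the copy
	out/minikube-linux-arm64 -p functional-354825 cp testdata/cp-test.txt /home/docker/cp-test.txt
	out/minikube-linux-arm64 -p functional-354825 ssh -n functional-354825 "sudo cat /home/docker/cp-test.txt"
	# node -> host
	out/minikube-linux-arm64 -p functional-354825 cp functional-354825:/home/docker/cp-test.txt /tmp/cp-test.txt
	# host -> a guest path whose parent directory does not exist yet
	out/minikube-linux-arm64 -p functional-354825 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt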

                                                
                                    
TestFunctional/parallel/FileSync (0.37s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/516937/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 ssh "sudo cat /etc/test/nested/copy/516937/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.37s)

                                                
                                    
TestFunctional/parallel/CertSync (2.09s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/516937.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 ssh "sudo cat /etc/ssl/certs/516937.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/516937.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 ssh "sudo cat /usr/share/ca-certificates/516937.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/5169372.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 ssh "sudo cat /etc/ssl/certs/5169372.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/5169372.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 ssh "sudo cat /usr/share/ca-certificates/5169372.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.09s)
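
The cert sync test checks that the same user-supplied certificate is visible in the guest under three paths: /etc/ssl/certs/<name>.pem, /usr/share/ca-certificates/<name>.pem, and a hash-named /etc/ssl/certs/<hash>.0 entry. A quick manual spot check with the paths from this run:
	out/minikube-linux-arm64 -p functional-354825 ssh "sudo cat /etc/ssl/certs/516937.pem"
	out/minikube-linux-arm64 -p functional-354825 ssh "sudo cat /usr/share/ca-certificates/516937.pem"
	out/minikube-linux-arm64 -p functional-354825 ssh "sudo cat /etc/ssl/certs/51391683.0"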

                                                
                                    
TestFunctional/parallel/NodeLabels (0.09s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-354825 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.73s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-354825 ssh "sudo systemctl is-active docker": exit status 1 (366.019141ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-354825 ssh "sudo systemctl is-active containerd": exit status 1 (361.868679ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.73s)
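
This profile runs with ContainerRuntime=crio, so the docker and containerd units are expected to be inactive; "systemctl is-active" signals that with stdout "inactive" and a non-zero exit (status 3 above), which is the passing outcome here. An illustrative check of the runtime that is active (not part of the test):
	out/minikube-linux-arm64 -p functional-354825 ssh "sudo systemctl is-active crio"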

                                                
                                    
TestFunctional/parallel/License (0.34s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.34s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.63s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-354825 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-354825 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-354825 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-354825 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 539050: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.63s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-354825 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.45s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-354825 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [c16bb675-ea93-4c44-bc37-b53688095ef9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [c16bb675-ea93-4c44-bc37-b53688095ef9] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003389089s
I1121 23:57:53.425980  516937 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.45s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-354825 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.83.242 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
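
Taken together, the tunnel subtests follow the usual minikube tunnel workflow. A rough sketch: the apply/get commands are the ones logged above, while backgrounding the tunnel and the final curl are illustrative, with the ingress IP taken from this run:
	# keep a tunnel running so LoadBalancer services get an ingress IP assigned
	out/minikube-linux-arm64 -p functional-354825 tunnel --alsologtostderr &
	kubectl --context functional-354825 apply -f testdata/testsvc.yaml
	kubectl --context functional-354825 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
	curl http://10.109.83.242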

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-354825 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.42s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "361.513467ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "62.442528ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "376.11251ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "54.867121ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.02s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-354825 /tmp/TestFunctionalparallelMountCmdany-port3663288260/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763770078657706829" to /tmp/TestFunctionalparallelMountCmdany-port3663288260/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763770078657706829" to /tmp/TestFunctionalparallelMountCmdany-port3663288260/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763770078657706829" to /tmp/TestFunctionalparallelMountCmdany-port3663288260/001/test-1763770078657706829
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-354825 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (348.166836ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1122 00:07:59.006182  516937 retry.go:31] will retry after 403.07904ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 22 00:07 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 22 00:07 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 22 00:07 test-1763770078657706829
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 ssh cat /mount-9p/test-1763770078657706829
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-354825 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [bc788ff4-5611-4676-aad8-857e9dd3a998] Pending
helpers_test.go:352: "busybox-mount" [bc788ff4-5611-4676-aad8-857e9dd3a998] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [bc788ff4-5611-4676-aad8-857e9dd3a998] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [bc788ff4-5611-4676-aad8-857e9dd3a998] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003896084s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-354825 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-354825 /tmp/TestFunctionalparallelMountCmdany-port3663288260/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.02s)
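
The any-port variant mounts a host directory into the guest over 9p and checks it from both sides. A minimal sketch with the guest mount point from the log; /tmp/hostdir stands in for the temporary directory the test creates:
	# run the 9p mount in the background, then verify it is visible in the guest
	out/minikube-linux-arm64 mount -p functional-354825 /tmp/hostdir:/mount-9p --alsologtostderr -v=1 &
	out/minikube-linux-arm64 -p functional-354825 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-arm64 -p functional-354825 ssh -- ls -la /mount-9p
	# tear the mount down when finished
	out/minikube-linux-arm64 -p functional-354825 ssh "sudo umount -f /mount-9p"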

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.76s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-354825 /tmp/TestFunctionalparallelMountCmdspecific-port2815829419/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-354825 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (370.644578ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1122 00:08:06.049545  516937 retry.go:31] will retry after 316.008666ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-354825 /tmp/TestFunctionalparallelMountCmdspecific-port2815829419/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-354825 ssh "sudo umount -f /mount-9p": exit status 1 (306.150679ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-354825 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-354825 /tmp/TestFunctionalparallelMountCmdspecific-port2815829419/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.76s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.31s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-354825 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1874520101/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-354825 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1874520101/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-354825 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1874520101/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-354825 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-354825 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1874520101/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-354825 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1874520101/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-354825 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1874520101/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.31s)
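
The specific-port and VerifyCleanup variants add two details visible in the commands above: the 9p server can be pinned to a fixed port, and "mount --kill" tears down every mount helper process for the profile at once (again with /tmp/hostdir as a stand-in host path):
	# pin the 9p server to port 46464 instead of an ephemeral one
	out/minikube-linux-arm64 mount -p functional-354825 /tmp/hostdir:/mount-9p --alsologtostderr -v=1 --port 46464 &
	# kill all outstanding mount processes for this profile in one go
	out/minikube-linux-arm64 mount -p functional-354825 --kill=true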

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.61s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.61s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.61s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 service list -o json
functional_test.go:1504: Took "606.790746ms" to run "out/minikube-linux-arm64 -p functional-354825 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.61s)

                                                
                                    
TestFunctional/parallel/Version/short (0.08s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
TestFunctional/parallel/Version/components (1.34s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-354825 version -o=json --components: (1.336616775s)
--- PASS: TestFunctional/parallel/Version/components (1.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-354825 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-354825 image ls --format short --alsologtostderr:
I1122 00:08:26.423925  545336 out.go:360] Setting OutFile to fd 1 ...
I1122 00:08:26.424039  545336 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1122 00:08:26.424049  545336 out.go:374] Setting ErrFile to fd 2...
I1122 00:08:26.424055  545336 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1122 00:08:26.424326  545336 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
I1122 00:08:26.424920  545336 config.go:182] Loaded profile config "functional-354825": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1122 00:08:26.425040  545336 config.go:182] Loaded profile config "functional-354825": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1122 00:08:26.425548  545336 cli_runner.go:164] Run: docker container inspect functional-354825 --format={{.State.Status}}
I1122 00:08:26.449605  545336 ssh_runner.go:195] Run: systemctl --version
I1122 00:08:26.449657  545336 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-354825
I1122 00:08:26.470373  545336 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33505 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/functional-354825/id_rsa Username:docker}
I1122 00:08:26.576571  545336 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-354825 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ b5f57ec6b9867 │ 51.6MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ 43911e833d64d │ 84.8MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ 7eb2c6ff0c5a7 │ 72.6MB │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ 05baa95f5142d │ 75.9MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ docker.io/library/nginx                 │ alpine             │ cbad6347cca28 │ 54.8MB │
│ docker.io/library/nginx                 │ latest             │ bb747ca923a5e │ 176MB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-354825 image ls --format table --alsologtostderr:
I1122 00:08:26.843568  545438 out.go:360] Setting OutFile to fd 1 ...
I1122 00:08:26.843772  545438 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1122 00:08:26.843804  545438 out.go:374] Setting ErrFile to fd 2...
I1122 00:08:26.843824  545438 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1122 00:08:26.845080  545438 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
I1122 00:08:26.845707  545438 config.go:182] Loaded profile config "functional-354825": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1122 00:08:26.845941  545438 config.go:182] Loaded profile config "functional-354825": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1122 00:08:26.846488  545438 cli_runner.go:164] Run: docker container inspect functional-354825 --format={{.State.Status}}
I1122 00:08:26.870507  545438 ssh_runner.go:195] Run: systemctl --version
I1122 00:08:26.870559  545438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-354825
I1122 00:08:26.891945  545438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33505 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/functional-354825/id_rsa Username:docker}
I1122 00:08:26.996863  545438 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-354825 image ls --format json --alsologtostderr:
[{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1","repoDigests":["docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54837949"},{"id":"bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712"],"repoTags":["docker.io/library/nginx:latest
"],"size":"175943180"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac5
2bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902","registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"84753391"},{"id":"05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6","registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"75938711"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigest
s":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v
1.12.1"],"size":"73195387"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"},{"id":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f","registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"72629077"},{"id":"b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500","registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302
583adee9e3a37cf3626484476fe18ec821953e951e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"51592017"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-354825 image ls --format json --alsologtostderr:
I1122 00:08:27.403868  545586 out.go:360] Setting OutFile to fd 1 ...
I1122 00:08:27.404491  545586 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1122 00:08:27.404534  545586 out.go:374] Setting ErrFile to fd 2...
I1122 00:08:27.404555  545586 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1122 00:08:27.404904  545586 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
I1122 00:08:27.405594  545586 config.go:182] Loaded profile config "functional-354825": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1122 00:08:27.405758  545586 config.go:182] Loaded profile config "functional-354825": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1122 00:08:27.406364  545586 cli_runner.go:164] Run: docker container inspect functional-354825 --format={{.State.Status}}
I1122 00:08:27.430795  545586 ssh_runner.go:195] Run: systemctl --version
I1122 00:08:27.430855  545586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-354825
I1122 00:08:27.462299  545586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33505 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/functional-354825/id_rsa Username:docker}
I1122 00:08:27.566885  545586 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-354825 image ls --format yaml --alsologtostderr:
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1
repoDigests:
- docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54837949"
- id: bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712
repoTags:
- docker.io/library/nginx:latest
size: "175943180"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: 05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "75938711"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
- registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "84753391"
- id: 7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "72629077"
- id: b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
- registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "51592017"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-354825 image ls --format yaml --alsologtostderr:
I1122 00:08:27.128862  545492 out.go:360] Setting OutFile to fd 1 ...
I1122 00:08:27.128992  545492 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1122 00:08:27.129007  545492 out.go:374] Setting ErrFile to fd 2...
I1122 00:08:27.129012  545492 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1122 00:08:27.129305  545492 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
I1122 00:08:27.135443  545492 config.go:182] Loaded profile config "functional-354825": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1122 00:08:27.135595  545492 config.go:182] Loaded profile config "functional-354825": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1122 00:08:27.136138  545492 cli_runner.go:164] Run: docker container inspect functional-354825 --format={{.State.Status}}
I1122 00:08:27.158121  545492 ssh_runner.go:195] Run: systemctl --version
I1122 00:08:27.158201  545492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-354825
I1122 00:08:27.185309  545492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33505 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/functional-354825/id_rsa Username:docker}
I1122 00:08:27.306667  545492 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.01s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-354825 ssh pgrep buildkitd: exit status 1 (399.880696ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 image build -t localhost/my-image:functional-354825 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-354825 image build -t localhost/my-image:functional-354825 testdata/build --alsologtostderr: (3.379302598s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-354825 image build -t localhost/my-image:functional-354825 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 151230b1942
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-354825
--> f436cc594dc
Successfully tagged localhost/my-image:functional-354825
f436cc594dcf462dfd35f728892b4b82311be99091c1db85082f20c87c206626
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-354825 image build -t localhost/my-image:functional-354825 testdata/build --alsologtostderr:
I1122 00:08:27.466576  545592 out.go:360] Setting OutFile to fd 1 ...
I1122 00:08:27.467372  545592 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1122 00:08:27.467418  545592 out.go:374] Setting ErrFile to fd 2...
I1122 00:08:27.467438  545592 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1122 00:08:27.474909  545592 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
I1122 00:08:27.476082  545592 config.go:182] Loaded profile config "functional-354825": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1122 00:08:27.478156  545592 config.go:182] Loaded profile config "functional-354825": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1122 00:08:27.478837  545592 cli_runner.go:164] Run: docker container inspect functional-354825 --format={{.State.Status}}
I1122 00:08:27.500164  545592 ssh_runner.go:195] Run: systemctl --version
I1122 00:08:27.500214  545592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-354825
I1122 00:08:27.525519  545592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33505 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/functional-354825/id_rsa Username:docker}
I1122 00:08:27.624951  545592 build_images.go:162] Building image from path: /tmp/build.202770076.tar
I1122 00:08:27.625031  545592 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1122 00:08:27.636952  545592 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.202770076.tar
I1122 00:08:27.641929  545592 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.202770076.tar: stat -c "%s %y" /var/lib/minikube/build/build.202770076.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.202770076.tar': No such file or directory
I1122 00:08:27.641961  545592 ssh_runner.go:362] scp /tmp/build.202770076.tar --> /var/lib/minikube/build/build.202770076.tar (3072 bytes)
I1122 00:08:27.667113  545592 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.202770076
I1122 00:08:27.676268  545592 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.202770076 -xf /var/lib/minikube/build/build.202770076.tar
I1122 00:08:27.686222  545592 crio.go:315] Building image: /var/lib/minikube/build/build.202770076
I1122 00:08:27.686296  545592 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-354825 /var/lib/minikube/build/build.202770076 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1122 00:08:30.750412  545592 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-354825 /var/lib/minikube/build/build.202770076 --cgroup-manager=cgroupfs: (3.064094768s)
I1122 00:08:30.750489  545592 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.202770076
I1122 00:08:30.757979  545592 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.202770076.tar
I1122 00:08:30.765544  545592 build_images.go:218] Built localhost/my-image:functional-354825 from /tmp/build.202770076.tar
I1122 00:08:30.765576  545592 build_images.go:134] succeeded building to: functional-354825
I1122 00:08:30.765582  545592 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.01s)
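
The STEP lines in the build output imply that testdata/build contains a three-line Dockerfile plus a content.txt file. A self-contained way to reproduce the same build against this profile (the Dockerfile below is reconstructed from the logged steps, and content.txt here is a placeholder, not the file shipped with the test data):
	mkdir -p /tmp/build && cd /tmp/build
	printf 'hello' > content.txt
	cat > Dockerfile <<-'EOF'
	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /
	EOF
	out/minikube-linux-arm64 -p functional-354825 image build -t localhost/my-image:functional-354825 /tmp/build --alsologtostderr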

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.63s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-354825
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.63s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 image rm kicbase/echo-server:functional-354825 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 image ls
2025/11/22 00:08:23 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-354825 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-354825
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-354825
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-354825
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (208.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1122 00:10:40.217186  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-561110 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m27.965312189s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (208.87s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 kubectl -- rollout status deployment/busybox
E1122 00:12:03.282970  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-561110 kubectl -- rollout status deployment/busybox: (4.149394375s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 kubectl -- exec busybox-7b57f96db7-dx9nw -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 kubectl -- exec busybox-7b57f96db7-fbtrb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 kubectl -- exec busybox-7b57f96db7-jnjz9 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 kubectl -- exec busybox-7b57f96db7-dx9nw -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 kubectl -- exec busybox-7b57f96db7-fbtrb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 kubectl -- exec busybox-7b57f96db7-jnjz9 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 kubectl -- exec busybox-7b57f96db7-dx9nw -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 kubectl -- exec busybox-7b57f96db7-fbtrb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 kubectl -- exec busybox-7b57f96db7-jnjz9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.95s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 kubectl -- exec busybox-7b57f96db7-dx9nw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 kubectl -- exec busybox-7b57f96db7-dx9nw -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 kubectl -- exec busybox-7b57f96db7-fbtrb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 kubectl -- exec busybox-7b57f96db7-fbtrb -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 kubectl -- exec busybox-7b57f96db7-jnjz9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 kubectl -- exec busybox-7b57f96db7-jnjz9 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.47s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (59.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 node add --alsologtostderr -v 5
E1122 00:12:44.616114  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/functional-354825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:12:44.623047  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/functional-354825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:12:44.634610  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/functional-354825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:12:44.656100  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/functional-354825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:12:44.697444  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/functional-354825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:12:44.778849  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/functional-354825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:12:44.940365  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/functional-354825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:12:45.262549  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/functional-354825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:12:45.905566  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/functional-354825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:12:47.186977  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/functional-354825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:12:49.748349  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/functional-354825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:12:54.869762  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/functional-354825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:13:05.111411  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/functional-354825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-561110 node add --alsologtostderr -v 5: (58.387793196s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-561110 status --alsologtostderr -v 5: (1.071211292s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (59.46s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-561110 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.085112088s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.09s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (19.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-561110 status --output json --alsologtostderr -v 5: (1.014915614s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 cp testdata/cp-test.txt ha-561110:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 ssh -n ha-561110 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 cp ha-561110:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2616405813/001/cp-test_ha-561110.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 ssh -n ha-561110 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 cp ha-561110:/home/docker/cp-test.txt ha-561110-m02:/home/docker/cp-test_ha-561110_ha-561110-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 ssh -n ha-561110 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 ssh -n ha-561110-m02 "sudo cat /home/docker/cp-test_ha-561110_ha-561110-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 cp ha-561110:/home/docker/cp-test.txt ha-561110-m03:/home/docker/cp-test_ha-561110_ha-561110-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 ssh -n ha-561110 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 ssh -n ha-561110-m03 "sudo cat /home/docker/cp-test_ha-561110_ha-561110-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 cp ha-561110:/home/docker/cp-test.txt ha-561110-m04:/home/docker/cp-test_ha-561110_ha-561110-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 ssh -n ha-561110 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 ssh -n ha-561110-m04 "sudo cat /home/docker/cp-test_ha-561110_ha-561110-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 cp testdata/cp-test.txt ha-561110-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 ssh -n ha-561110-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 cp ha-561110-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2616405813/001/cp-test_ha-561110-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 ssh -n ha-561110-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 cp ha-561110-m02:/home/docker/cp-test.txt ha-561110:/home/docker/cp-test_ha-561110-m02_ha-561110.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 ssh -n ha-561110-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 ssh -n ha-561110 "sudo cat /home/docker/cp-test_ha-561110-m02_ha-561110.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 cp ha-561110-m02:/home/docker/cp-test.txt ha-561110-m03:/home/docker/cp-test_ha-561110-m02_ha-561110-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 ssh -n ha-561110-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 ssh -n ha-561110-m03 "sudo cat /home/docker/cp-test_ha-561110-m02_ha-561110-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 cp ha-561110-m02:/home/docker/cp-test.txt ha-561110-m04:/home/docker/cp-test_ha-561110-m02_ha-561110-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 ssh -n ha-561110-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 ssh -n ha-561110-m04 "sudo cat /home/docker/cp-test_ha-561110-m02_ha-561110-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 cp testdata/cp-test.txt ha-561110-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 ssh -n ha-561110-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 cp ha-561110-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2616405813/001/cp-test_ha-561110-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 ssh -n ha-561110-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 cp ha-561110-m03:/home/docker/cp-test.txt ha-561110:/home/docker/cp-test_ha-561110-m03_ha-561110.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 ssh -n ha-561110-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 ssh -n ha-561110 "sudo cat /home/docker/cp-test_ha-561110-m03_ha-561110.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 cp ha-561110-m03:/home/docker/cp-test.txt ha-561110-m02:/home/docker/cp-test_ha-561110-m03_ha-561110-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 ssh -n ha-561110-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 ssh -n ha-561110-m02 "sudo cat /home/docker/cp-test_ha-561110-m03_ha-561110-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 cp ha-561110-m03:/home/docker/cp-test.txt ha-561110-m04:/home/docker/cp-test_ha-561110-m03_ha-561110-m04.txt
E1122 00:13:25.593626  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/functional-354825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 ssh -n ha-561110-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 ssh -n ha-561110-m04 "sudo cat /home/docker/cp-test_ha-561110-m03_ha-561110-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 cp testdata/cp-test.txt ha-561110-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 ssh -n ha-561110-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 cp ha-561110-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2616405813/001/cp-test_ha-561110-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 ssh -n ha-561110-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 cp ha-561110-m04:/home/docker/cp-test.txt ha-561110:/home/docker/cp-test_ha-561110-m04_ha-561110.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 ssh -n ha-561110-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 ssh -n ha-561110 "sudo cat /home/docker/cp-test_ha-561110-m04_ha-561110.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 cp ha-561110-m04:/home/docker/cp-test.txt ha-561110-m02:/home/docker/cp-test_ha-561110-m04_ha-561110-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 ssh -n ha-561110-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 ssh -n ha-561110-m02 "sudo cat /home/docker/cp-test_ha-561110-m04_ha-561110-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 cp ha-561110-m04:/home/docker/cp-test.txt ha-561110-m03:/home/docker/cp-test_ha-561110-m04_ha-561110-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 ssh -n ha-561110-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 ssh -n ha-561110-m03 "sudo cat /home/docker/cp-test_ha-561110-m04_ha-561110-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.45s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-561110 node stop m02 --alsologtostderr -v 5: (12.029534726s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-561110 status --alsologtostderr -v 5: exit status 7 (750.612189ms)

                                                
                                                
-- stdout --
	ha-561110
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-561110-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-561110-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-561110-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1122 00:13:43.303744  560493 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:13:43.303899  560493 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:13:43.303912  560493 out.go:374] Setting ErrFile to fd 2...
	I1122 00:13:43.303917  560493 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:13:43.304184  560493 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1122 00:13:43.304373  560493 out.go:368] Setting JSON to false
	I1122 00:13:43.304402  560493 mustload.go:66] Loading cluster: ha-561110
	I1122 00:13:43.304442  560493 notify.go:221] Checking for updates...
	I1122 00:13:43.304821  560493 config.go:182] Loaded profile config "ha-561110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:13:43.304848  560493 status.go:174] checking status of ha-561110 ...
	I1122 00:13:43.305691  560493 cli_runner.go:164] Run: docker container inspect ha-561110 --format={{.State.Status}}
	I1122 00:13:43.331655  560493 status.go:371] ha-561110 host status = "Running" (err=<nil>)
	I1122 00:13:43.331687  560493 host.go:66] Checking if "ha-561110" exists ...
	I1122 00:13:43.331989  560493 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-561110
	I1122 00:13:43.351603  560493 host.go:66] Checking if "ha-561110" exists ...
	I1122 00:13:43.351949  560493 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:13:43.352001  560493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110
	I1122 00:13:43.375707  560493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33510 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110/id_rsa Username:docker}
	I1122 00:13:43.484621  560493 ssh_runner.go:195] Run: systemctl --version
	I1122 00:13:43.491155  560493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:13:43.503943  560493 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:13:43.562158  560493 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:true NGoroutines:72 SystemTime:2025-11-22 00:13:43.551134055 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:13:43.562693  560493 kubeconfig.go:125] found "ha-561110" server: "https://192.168.49.254:8443"
	I1122 00:13:43.562738  560493 api_server.go:166] Checking apiserver status ...
	I1122 00:13:43.562784  560493 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:13:43.575181  560493 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1229/cgroup
	I1122 00:13:43.583930  560493 api_server.go:182] apiserver freezer: "4:freezer:/docker/b491a219f5f6a6da2c04a012513c1a2266c783068f6131eddf3365f4209ece96/crio/crio-9d532c0285521ab24b03480c816f410d2cd8b13acc0fc4d80d97ccdd9062310d"
	I1122 00:13:43.584003  560493 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b491a219f5f6a6da2c04a012513c1a2266c783068f6131eddf3365f4209ece96/crio/crio-9d532c0285521ab24b03480c816f410d2cd8b13acc0fc4d80d97ccdd9062310d/freezer.state
	I1122 00:13:43.592518  560493 api_server.go:204] freezer state: "THAWED"
	I1122 00:13:43.592546  560493 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1122 00:13:43.600808  560493 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1122 00:13:43.600846  560493 status.go:463] ha-561110 apiserver status = Running (err=<nil>)
	I1122 00:13:43.600857  560493 status.go:176] ha-561110 status: &{Name:ha-561110 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1122 00:13:43.600873  560493 status.go:174] checking status of ha-561110-m02 ...
	I1122 00:13:43.601182  560493 cli_runner.go:164] Run: docker container inspect ha-561110-m02 --format={{.State.Status}}
	I1122 00:13:43.618444  560493 status.go:371] ha-561110-m02 host status = "Stopped" (err=<nil>)
	I1122 00:13:43.618471  560493 status.go:384] host is not running, skipping remaining checks
	I1122 00:13:43.618478  560493 status.go:176] ha-561110-m02 status: &{Name:ha-561110-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1122 00:13:43.618498  560493 status.go:174] checking status of ha-561110-m03 ...
	I1122 00:13:43.618810  560493 cli_runner.go:164] Run: docker container inspect ha-561110-m03 --format={{.State.Status}}
	I1122 00:13:43.635984  560493 status.go:371] ha-561110-m03 host status = "Running" (err=<nil>)
	I1122 00:13:43.636007  560493 host.go:66] Checking if "ha-561110-m03" exists ...
	I1122 00:13:43.636308  560493 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-561110-m03
	I1122 00:13:43.655565  560493 host.go:66] Checking if "ha-561110-m03" exists ...
	I1122 00:13:43.655983  560493 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:13:43.656033  560493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m03
	I1122 00:13:43.673335  560493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33520 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110-m03/id_rsa Username:docker}
	I1122 00:13:43.771385  560493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:13:43.785007  560493 kubeconfig.go:125] found "ha-561110" server: "https://192.168.49.254:8443"
	I1122 00:13:43.785037  560493 api_server.go:166] Checking apiserver status ...
	I1122 00:13:43.785088  560493 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:13:43.796651  560493 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1202/cgroup
	I1122 00:13:43.805578  560493 api_server.go:182] apiserver freezer: "4:freezer:/docker/6cdfe9f470115481048318a0968e691ddf7a1692a259e5f162538eca1b205a10/crio/crio-3c55cb73c14263cd3add473314c3c06c160b74c70772a560f614d7fe29c08a7a"
	I1122 00:13:43.805655  560493 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/6cdfe9f470115481048318a0968e691ddf7a1692a259e5f162538eca1b205a10/crio/crio-3c55cb73c14263cd3add473314c3c06c160b74c70772a560f614d7fe29c08a7a/freezer.state
	I1122 00:13:43.813691  560493 api_server.go:204] freezer state: "THAWED"
	I1122 00:13:43.813736  560493 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1122 00:13:43.823545  560493 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1122 00:13:43.823622  560493 status.go:463] ha-561110-m03 apiserver status = Running (err=<nil>)
	I1122 00:13:43.823638  560493 status.go:176] ha-561110-m03 status: &{Name:ha-561110-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1122 00:13:43.823666  560493 status.go:174] checking status of ha-561110-m04 ...
	I1122 00:13:43.824036  560493 cli_runner.go:164] Run: docker container inspect ha-561110-m04 --format={{.State.Status}}
	I1122 00:13:43.840744  560493 status.go:371] ha-561110-m04 host status = "Running" (err=<nil>)
	I1122 00:13:43.840772  560493 host.go:66] Checking if "ha-561110-m04" exists ...
	I1122 00:13:43.841069  560493 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-561110-m04
	I1122 00:13:43.860815  560493 host.go:66] Checking if "ha-561110-m04" exists ...
	I1122 00:13:43.861166  560493 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:13:43.861218  560493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-561110-m04
	I1122 00:13:43.878290  560493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33525 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/ha-561110-m04/id_rsa Username:docker}
	I1122 00:13:43.975391  560493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:13:43.998322  560493 status.go:176] ha-561110-m04 status: &{Name:ha-561110-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.78s)
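
The status trace above shows how node health is decided: the apiserver PID is found with pgrep, its freezer cgroup state is read, and finally https://192.168.49.254:8443/healthz is queried. A minimal sketch of that last probe in Go, assuming the HA endpoint from the log; the real status code validates the cluster CA, which this sketch deliberately skips:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeHealthz mirrors the "Checking apiserver healthz at ..." step above.
// Certificate verification is skipped here for brevity (an assumption; the
// actual check trusts the cluster CA bundle instead).
func probeHealthz(url string) (int, string, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return 0, "", err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode, string(body), nil
}

func main() {
	code, body, err := probeHealthz("https://192.168.49.254:8443/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	fmt.Printf("healthz returned %d: %s\n", code, body) // a healthy node returns 200 / "ok"
}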

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.80s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (27.32s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 node start m02 --alsologtostderr -v 5
E1122 00:14:06.554908  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/functional-354825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-561110 node start m02 --alsologtostderr -v 5: (25.911579954s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-561110 status --alsologtostderr -v 5: (1.264096413s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (27.32s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.22s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.222876029s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.22s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (24.44s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-561110 stop --alsologtostderr -v 5: (24.307959932s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-561110 status --alsologtostderr -v 5: exit status 7 (127.928201ms)

                                                
                                                
-- stdout --
	ha-561110
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-561110-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-561110-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1122 00:23:41.966156  571729 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:23:41.966383  571729 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:23:41.966415  571729 out.go:374] Setting ErrFile to fd 2...
	I1122 00:23:41.966435  571729 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:23:41.966742  571729 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1122 00:23:41.966958  571729 out.go:368] Setting JSON to false
	I1122 00:23:41.967016  571729 mustload.go:66] Loading cluster: ha-561110
	I1122 00:23:41.967101  571729 notify.go:221] Checking for updates...
	I1122 00:23:41.967497  571729 config.go:182] Loaded profile config "ha-561110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:23:41.967537  571729 status.go:174] checking status of ha-561110 ...
	I1122 00:23:41.968087  571729 cli_runner.go:164] Run: docker container inspect ha-561110 --format={{.State.Status}}
	I1122 00:23:41.987496  571729 status.go:371] ha-561110 host status = "Stopped" (err=<nil>)
	I1122 00:23:41.987515  571729 status.go:384] host is not running, skipping remaining checks
	I1122 00:23:41.987521  571729 status.go:176] ha-561110 status: &{Name:ha-561110 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1122 00:23:41.987550  571729 status.go:174] checking status of ha-561110-m02 ...
	I1122 00:23:41.987861  571729 cli_runner.go:164] Run: docker container inspect ha-561110-m02 --format={{.State.Status}}
	I1122 00:23:42.024440  571729 status.go:371] ha-561110-m02 host status = "Stopped" (err=<nil>)
	I1122 00:23:42.024466  571729 status.go:384] host is not running, skipping remaining checks
	I1122 00:23:42.024473  571729 status.go:176] ha-561110-m02 status: &{Name:ha-561110-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1122 00:23:42.024506  571729 status.go:174] checking status of ha-561110-m04 ...
	I1122 00:23:42.024841  571729 cli_runner.go:164] Run: docker container inspect ha-561110-m04 --format={{.State.Status}}
	I1122 00:23:42.043684  571729 status.go:371] ha-561110-m04 host status = "Stopped" (err=<nil>)
	I1122 00:23:42.043706  571729 status.go:384] host is not running, skipping remaining checks
	I1122 00:23:42.043727  571729 status.go:176] ha-561110-m04 status: &{Name:ha-561110-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (24.44s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (68.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-561110 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m7.739901728s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (68.73s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (55.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 node add --control-plane --alsologtostderr -v 5
E1122 00:25:40.217903  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-561110 node add --control-plane --alsologtostderr -v 5: (54.715268521s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-561110 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-561110 status --alsologtostderr -v 5: (1.083321258s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (55.80s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.048592736s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.05s)

                                                
                                    
TestJSONOutput/start/Command (75.87s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-557707 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-557707 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m15.855056082s)
--- PASS: TestJSONOutput/start/Command (75.87s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.79s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-557707 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-557707 --output=json --user=testUser: (5.785655227s)
--- PASS: TestJSONOutput/stop/Command (5.79s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-989405 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-989405 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (90.3509ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"56891412-bb09-4cc5-a6dc-f9a9155260da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-989405] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"246ae343-a17f-46ff-afa4-7612028e4162","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21934"}}
	{"specversion":"1.0","id":"55489220-687f-41f1-86b3-fed53e707f15","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7c0b4097-c8da-4d2b-a46d-4a35c23d60e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21934-513600/kubeconfig"}}
	{"specversion":"1.0","id":"4791b501-1f55-4b0c-a394-9b75ac524152","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-513600/.minikube"}}
	{"specversion":"1.0","id":"85f2b374-55fe-4956-b740-434d071e0f70","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"37a82aba-925e-4faa-ae13-3b8d42c5575a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b32a0b53-a414-4903-aef6-f6eed2914279","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-989405" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-989405
--- PASS: TestErrorJSONOutput (0.23s)
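
Each stdout line captured above is a CloudEvents-style JSON object (specversion, id, source, type, data). A minimal Go sketch that reads such lines from stdin and reports only the io.k8s.sigs.minikube.error events, using the field names visible in the output above (the struct and program are illustrative, not part of minikube):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event models the CloudEvents-style lines emitted with --output=json.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("exit %s: %s\n", ev.Data["exitcode"], ev.Data["message"])
		}
	}
}

Piping the start command's JSON output into this program would surface the DRV_UNSUPPORTED_OS error event shown above ("The driver 'fail' is not supported on linux/arm64", exit code 56).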

                                                
                                    
TestKicCustomNetwork/create_custom_network (41.31s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-325747 --network=
E1122 00:27:44.619418  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/functional-354825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-325747 --network=: (38.988637511s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-325747" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-325747
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-325747: (2.286156285s)
--- PASS: TestKicCustomNetwork/create_custom_network (41.31s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (36.38s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-940667 --network=bridge
E1122 00:28:43.285931  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-940667 --network=bridge: (34.254948806s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-940667" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-940667
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-940667: (2.105902889s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (36.38s)

                                                
                                    
x
+
TestKicExistingNetwork (36.2s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1122 00:28:47.597014  516937 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1122 00:28:47.613771  516937 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1122 00:28:47.614798  516937 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1122 00:28:47.614839  516937 cli_runner.go:164] Run: docker network inspect existing-network
W1122 00:28:47.630508  516937 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1122 00:28:47.630536  516937 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1122 00:28:47.630553  516937 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1122 00:28:47.630650  516937 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1122 00:28:47.648695  516937 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b16c782e3da8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ba:82:00:9d:45:d0} reservation:<nil>}
I1122 00:28:47.650352  516937 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400142d400}
I1122 00:28:47.650391  516937 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1122 00:28:47.650443  516937 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1122 00:28:47.709936  516937 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-960188 --network=existing-network
E1122 00:29:07.681950  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/functional-354825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-960188 --network=existing-network: (33.923022166s)
helpers_test.go:175: Cleaning up "existing-network-960188" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-960188
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-960188: (2.123749793s)
I1122 00:29:23.773109  516937 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (36.20s)
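
Unlike the previous tests, the harness pre-creates the network itself before pointing minikube at it. A sketch that reproduces the docker network create call logged at 00:28:47 above; the subnet, gateway, MTU and labels are copied from that line, while the surrounding error handling is only illustrative:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the network_create.go call logged above: a bridge network with a
	// fixed subnet/gateway, MTU 1500, and minikube's ownership labels.
	args := []string{
		"network", "create",
		"--driver=bridge",
		"--subnet=192.168.58.0/24",
		"--gateway=192.168.58.1",
		"-o", "--ip-masq",
		"-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=existing-network",
		"existing-network",
	}
	if out, err := exec.Command("docker", args...).CombinedOutput(); err != nil {
		fmt.Printf("docker network create failed: %v\n%s", err, out)
		return
	}
	fmt.Println("network existing-network created")
}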

                                                
                                    
x
+
TestKicCustomSubnet (37.83s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-569754 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-569754 --subnet=192.168.60.0/24: (35.215710344s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-569754 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-569754" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-569754
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-569754: (2.591327734s)
--- PASS: TestKicCustomSubnet (37.83s)
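
The assertion at kic_custom_network_test.go:161 reads the first IPAM config entry back out of the created network and compares it with the value passed to --subnet. A standalone sketch of that comparison, with the profile name and subnet taken from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const profile = "custom-subnet-569754" // from the run above
	const want = "192.168.60.0/24"         // value passed to --subnet
	// Same Go template the test uses to pull the subnet out of the network.
	out, err := exec.Command("docker", "network", "inspect", profile,
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		fmt.Println("docker network inspect failed:", err)
		return
	}
	if got := strings.TrimSpace(string(out)); got == want {
		fmt.Println("subnet matches:", got)
	} else {
		fmt.Printf("subnet mismatch: got %q, want %q\n", got, want)
	}
}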

                                                
                                    
x
+
TestKicStaticIP (38.45s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-675369 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-675369 --static-ip=192.168.200.200: (36.127095658s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-675369 ip
helpers_test.go:175: Cleaning up "static-ip-675369" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-675369
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-675369: (2.16191018s)
--- PASS: TestKicStaticIP (38.45s)
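
The follow-up at kic_custom_network_test.go:138 only needs to confirm that minikube ip reports the address requested via --static-ip. A sketch of that check, with the profile name and IP taken from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const profile = "static-ip-675369" // from the run above
	const want = "192.168.200.200"     // value passed to --static-ip
	out, err := exec.Command("out/minikube-linux-arm64", "-p", profile, "ip").Output()
	if err != nil {
		fmt.Println("minikube ip failed:", err)
		return
	}
	if got := strings.TrimSpace(string(out)); got != want {
		fmt.Printf("static IP not honoured: got %s, want %s\n", got, want)
		return
	}
	fmt.Println("cluster is reachable at the requested static IP:", want)
}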

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (69.52s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-026503 --driver=docker  --container-runtime=crio
E1122 00:30:40.217688  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-026503 --driver=docker  --container-runtime=crio: (32.462831365s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-029096 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-029096 --driver=docker  --container-runtime=crio: (31.301269637s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-026503
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-029096
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-029096" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-029096
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-029096: (2.124170223s)
helpers_test.go:175: Cleaning up "first-026503" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-026503
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-026503: (2.031135776s)
--- PASS: TestMinikubeProfile (69.52s)
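
profile list -ojson is what the test parses to confirm the active profile switches between first-026503 and second-029096. A decoding sketch; the valid/invalid grouping and the Name field reflect the usual shape of that JSON and are assumptions here, not something this log shows:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList models only the fields this sketch needs; the real output
// also carries the full profile config per entry (assumed schema).
type profileList struct {
	Valid   []struct{ Name string } `json:"valid"`
	Invalid []struct{ Name string } `json:"invalid"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "profile", "list", "-ojson").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		fmt.Println("unexpected JSON:", err)
		return
	}
	for _, p := range pl.Valid {
		fmt.Println("valid profile:", p.Name)
	}
	for _, p := range pl.Invalid {
		fmt.Println("invalid profile:", p.Name)
	}
}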

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (8.71s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-125218 --memory=3072 --mount-string /tmp/TestMountStartserial1189461682/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-125218 --memory=3072 --mount-string /tmp/TestMountStartserial1189461682/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.705655988s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.71s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-125218 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (9.01s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-127294 --memory=3072 --mount-string /tmp/TestMountStartserial1189461682/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-127294 --memory=3072 --mount-string /tmp/TestMountStartserial1189461682/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.010294129s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.01s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-127294 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.72s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-125218 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-125218 --alsologtostderr -v=5: (1.718838972s)
--- PASS: TestMountStart/serial/DeleteFirst (1.72s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-127294 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.3s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-127294
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-127294: (1.302007462s)
--- PASS: TestMountStart/serial/Stop (1.30s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (7.99s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-127294
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-127294: (6.985811244s)
--- PASS: TestMountStart/serial/RestartStopped (7.99s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-127294 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (136.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-571094 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1122 00:32:44.616358  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/functional-354825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-571094 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m15.674230593s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-571094 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (136.19s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (5.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-571094 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-571094 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-571094 -- rollout status deployment/busybox: (3.218124993s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-571094 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-571094 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-571094 -- exec busybox-7b57f96db7-6rtgc -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-571094 -- exec busybox-7b57f96db7-lnnfw -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-571094 -- exec busybox-7b57f96db7-6rtgc -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-571094 -- exec busybox-7b57f96db7-lnnfw -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-571094 -- exec busybox-7b57f96db7-6rtgc -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-571094 -- exec busybox-7b57f96db7-lnnfw -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.15s)
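
The jsonpath queries above feed a simple scheduling check: the busybox Deployment should end up with two pods on distinct addresses before the DNS lookups run. A sketch of that pod-IP check using the same jsonpath expression; it assumes kubectl can reach the multinode-571094 context directly, as the kubectl --context call later in this report does:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same jsonpath the test uses to collect the busybox pod IPs.
	out, err := exec.Command("kubectl", "--context", "multinode-571094",
		"get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
	if err != nil {
		fmt.Println("kubectl get pods failed:", err)
		return
	}
	ips := strings.Fields(string(out))
	seen := map[string]bool{}
	for _, ip := range ips {
		seen[ip] = true
	}
	if len(ips) == 2 && len(seen) == 2 {
		fmt.Println("two busybox pods with distinct IPs:", ips)
	} else {
		fmt.Println("unexpected pod IP set:", ips)
	}
}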

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-571094 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-571094 -- exec busybox-7b57f96db7-6rtgc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-571094 -- exec busybox-7b57f96db7-6rtgc -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-571094 -- exec busybox-7b57f96db7-lnnfw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-571094 -- exec busybox-7b57f96db7-lnnfw -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.89s)
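
The shell pipeline above (nslookup | awk 'NR==5' | cut -d' ' -f3) extracts the host gateway address, 192.168.67.1 in this run, and pings it from inside each pod. A rough standalone equivalent that scans for the last Address line instead of hard-coding the line number; the pod name is taken from this run and the parsing is an approximation of busybox nslookup output:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const pod = "busybox-7b57f96db7-6rtgc" // pod name from the run above
	// Resolve host.minikube.internal from inside the pod, as the test does.
	out, err := exec.Command("kubectl", "--context", "multinode-571094",
		"exec", pod, "--", "nslookup", "host.minikube.internal").Output()
	if err != nil {
		fmt.Println("nslookup failed:", err)
		return
	}
	// The test hard-codes awk 'NR==5'; here we just take the last "Address" line.
	var hostIP string
	for _, line := range strings.Split(string(out), "\n") {
		if strings.Contains(line, "Address") {
			f := strings.Fields(line)
			hostIP = f[len(f)-1]
		}
	}
	if hostIP == "" {
		fmt.Println("could not resolve host.minikube.internal")
		return
	}
	// One ping from the pod back to the host gateway (192.168.67.1 in this run).
	ping := exec.Command("kubectl", "--context", "multinode-571094",
		"exec", pod, "--", "ping", "-c", "1", hostIP)
	if err := ping.Run(); err != nil {
		fmt.Println("ping failed:", err)
		return
	}
	fmt.Println("host reachable from pod at", hostIP)
}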

                                                
                                    
x
+
TestMultiNode/serial/AddNode (58.53s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-571094 -v=5 --alsologtostderr
E1122 00:35:40.218003  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-571094 -v=5 --alsologtostderr: (57.825859239s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-571094 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (58.53s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-571094 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.13s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.72s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (10.42s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-571094 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-571094 cp testdata/cp-test.txt multinode-571094:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-571094 ssh -n multinode-571094 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-571094 cp multinode-571094:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1074080616/001/cp-test_multinode-571094.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-571094 ssh -n multinode-571094 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-571094 cp multinode-571094:/home/docker/cp-test.txt multinode-571094-m02:/home/docker/cp-test_multinode-571094_multinode-571094-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-571094 ssh -n multinode-571094 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-571094 ssh -n multinode-571094-m02 "sudo cat /home/docker/cp-test_multinode-571094_multinode-571094-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-571094 cp multinode-571094:/home/docker/cp-test.txt multinode-571094-m03:/home/docker/cp-test_multinode-571094_multinode-571094-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-571094 ssh -n multinode-571094 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-571094 ssh -n multinode-571094-m03 "sudo cat /home/docker/cp-test_multinode-571094_multinode-571094-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-571094 cp testdata/cp-test.txt multinode-571094-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-571094 ssh -n multinode-571094-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-571094 cp multinode-571094-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1074080616/001/cp-test_multinode-571094-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-571094 ssh -n multinode-571094-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-571094 cp multinode-571094-m02:/home/docker/cp-test.txt multinode-571094:/home/docker/cp-test_multinode-571094-m02_multinode-571094.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-571094 ssh -n multinode-571094-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-571094 ssh -n multinode-571094 "sudo cat /home/docker/cp-test_multinode-571094-m02_multinode-571094.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-571094 cp multinode-571094-m02:/home/docker/cp-test.txt multinode-571094-m03:/home/docker/cp-test_multinode-571094-m02_multinode-571094-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-571094 ssh -n multinode-571094-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-571094 ssh -n multinode-571094-m03 "sudo cat /home/docker/cp-test_multinode-571094-m02_multinode-571094-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-571094 cp testdata/cp-test.txt multinode-571094-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-571094 ssh -n multinode-571094-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-571094 cp multinode-571094-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1074080616/001/cp-test_multinode-571094-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-571094 ssh -n multinode-571094-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-571094 cp multinode-571094-m03:/home/docker/cp-test.txt multinode-571094:/home/docker/cp-test_multinode-571094-m03_multinode-571094.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-571094 ssh -n multinode-571094-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-571094 ssh -n multinode-571094 "sudo cat /home/docker/cp-test_multinode-571094-m03_multinode-571094.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-571094 cp multinode-571094-m03:/home/docker/cp-test.txt multinode-571094-m02:/home/docker/cp-test_multinode-571094-m03_multinode-571094-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-571094 ssh -n multinode-571094-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-571094 ssh -n multinode-571094-m02 "sudo cat /home/docker/cp-test_multinode-571094-m03_multinode-571094-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.42s)
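
Each pairing above is the same two-step pattern: minikube cp to place a file on a node, then minikube ssh -n with sudo cat to read it back. A compact sketch of one such round trip; the commands mirror the log, while the byte-for-byte comparison is only illustrative:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const profile = "multinode-571094"
	const remote = "/home/docker/cp-test.txt"
	local, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		fmt.Println("read local file:", err)
		return
	}
	// Copy into the primary node, then cat it back over ssh, as the helpers do.
	if out, err := exec.Command("out/minikube-linux-arm64", "-p", profile,
		"cp", "testdata/cp-test.txt", profile+":"+remote).CombinedOutput(); err != nil {
		fmt.Printf("minikube cp failed: %v\n%s", err, out)
		return
	}
	got, err := exec.Command("out/minikube-linux-arm64", "-p", profile,
		"ssh", "-n", profile, "sudo cat "+remote).Output()
	if err != nil {
		fmt.Println("minikube ssh failed:", err)
		return
	}
	if bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(local)) {
		fmt.Println("copied file matches the local source")
	} else {
		fmt.Println("copied file differs from the local source")
	}
}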

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-571094 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-571094 node stop m03: (1.320312986s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-571094 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-571094 status: exit status 7 (556.82791ms)

                                                
                                                
-- stdout --
	multinode-571094
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-571094-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-571094-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-571094 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-571094 status --alsologtostderr: exit status 7 (529.530612ms)

                                                
                                                
-- stdout --
	multinode-571094
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-571094-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-571094-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1122 00:35:55.264978  622493 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:35:55.265098  622493 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:35:55.265110  622493 out.go:374] Setting ErrFile to fd 2...
	I1122 00:35:55.265122  622493 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:35:55.265463  622493 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1122 00:35:55.265677  622493 out.go:368] Setting JSON to false
	I1122 00:35:55.265705  622493 mustload.go:66] Loading cluster: multinode-571094
	I1122 00:35:55.266397  622493 config.go:182] Loaded profile config "multinode-571094": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:35:55.266417  622493 status.go:174] checking status of multinode-571094 ...
	I1122 00:35:55.267127  622493 cli_runner.go:164] Run: docker container inspect multinode-571094 --format={{.State.Status}}
	I1122 00:35:55.267391  622493 notify.go:221] Checking for updates...
	I1122 00:35:55.286732  622493 status.go:371] multinode-571094 host status = "Running" (err=<nil>)
	I1122 00:35:55.286757  622493 host.go:66] Checking if "multinode-571094" exists ...
	I1122 00:35:55.287054  622493 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-571094
	I1122 00:35:55.319456  622493 host.go:66] Checking if "multinode-571094" exists ...
	I1122 00:35:55.319769  622493 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:35:55.319817  622493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-571094
	I1122 00:35:55.338727  622493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33625 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/multinode-571094/id_rsa Username:docker}
	I1122 00:35:55.435058  622493 ssh_runner.go:195] Run: systemctl --version
	I1122 00:35:55.441440  622493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:35:55.454017  622493 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:35:55.512885  622493 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-22 00:35:55.503200792 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:35:55.513434  622493 kubeconfig.go:125] found "multinode-571094" server: "https://192.168.67.2:8443"
	I1122 00:35:55.513474  622493 api_server.go:166] Checking apiserver status ...
	I1122 00:35:55.513517  622493 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:35:55.524920  622493 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1218/cgroup
	I1122 00:35:55.533193  622493 api_server.go:182] apiserver freezer: "4:freezer:/docker/8e1054aeadd80b4a6e55120cd56f00e0f5a79973c4f419c5a048f7ff6f228e1a/crio/crio-ae7d0213ca56e926d35385fb78a0c67d76aad3564d1adc2955ce65dabbba5e93"
	I1122 00:35:55.533262  622493 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/8e1054aeadd80b4a6e55120cd56f00e0f5a79973c4f419c5a048f7ff6f228e1a/crio/crio-ae7d0213ca56e926d35385fb78a0c67d76aad3564d1adc2955ce65dabbba5e93/freezer.state
	I1122 00:35:55.541386  622493 api_server.go:204] freezer state: "THAWED"
	I1122 00:35:55.541414  622493 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1122 00:35:55.549569  622493 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1122 00:35:55.549601  622493 status.go:463] multinode-571094 apiserver status = Running (err=<nil>)
	I1122 00:35:55.549621  622493 status.go:176] multinode-571094 status: &{Name:multinode-571094 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1122 00:35:55.549651  622493 status.go:174] checking status of multinode-571094-m02 ...
	I1122 00:35:55.550008  622493 cli_runner.go:164] Run: docker container inspect multinode-571094-m02 --format={{.State.Status}}
	I1122 00:35:55.568679  622493 status.go:371] multinode-571094-m02 host status = "Running" (err=<nil>)
	I1122 00:35:55.568708  622493 host.go:66] Checking if "multinode-571094-m02" exists ...
	I1122 00:35:55.569026  622493 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-571094-m02
	I1122 00:35:55.591189  622493 host.go:66] Checking if "multinode-571094-m02" exists ...
	I1122 00:35:55.591534  622493 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:35:55.591580  622493 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-571094-m02
	I1122 00:35:55.609647  622493 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33630 SSHKeyPath:/home/jenkins/minikube-integration/21934-513600/.minikube/machines/multinode-571094-m02/id_rsa Username:docker}
	I1122 00:35:55.711229  622493 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:35:55.723906  622493 status.go:176] multinode-571094-m02 status: &{Name:multinode-571094-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1122 00:35:55.723988  622493 status.go:174] checking status of multinode-571094-m03 ...
	I1122 00:35:55.724315  622493 cli_runner.go:164] Run: docker container inspect multinode-571094-m03 --format={{.State.Status}}
	I1122 00:35:55.741951  622493 status.go:371] multinode-571094-m03 host status = "Stopped" (err=<nil>)
	I1122 00:35:55.741975  622493 status.go:384] host is not running, skipping remaining checks
	I1122 00:35:55.741983  622493 status.go:176] multinode-571094-m03 status: &{Name:multinode-571094-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.41s)
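
Note the exit code: once m03 is stopped, minikube status still prints the per-node table but exits non-zero (7 in this run), which is how the test distinguishes a degraded cluster from a healthy one. A sketch that surfaces that exit code; the reading of 7 as "a host is stopped" is taken from this run rather than from minikube documentation:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "multinode-571094", "status")
	out, err := cmd.Output()
	fmt.Print(string(out)) // per-node status table, as in the log above
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("all nodes running (exit 0)")
	case errors.As(err, &exitErr):
		// In the run above a stopped worker produced exit status 7.
		fmt.Println("status reported a problem, exit code:", exitErr.ExitCode())
	default:
		fmt.Println("could not run minikube status:", err)
	}
}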

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (8.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-571094 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-571094 node start m03 -v=5 --alsologtostderr: (7.584126954s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-571094 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.38s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (76.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-571094
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-571094
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-571094: (25.012429609s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-571094 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-571094 --wait=true -v=5 --alsologtostderr: (51.39555572s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-571094
--- PASS: TestMultiNode/serial/RestartKeepsNodes (76.54s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-571094 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-571094 node delete m03: (4.953149384s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-571094 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.66s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (24.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-571094 stop
E1122 00:37:44.618050  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/functional-354825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-571094 stop: (23.872866167s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-571094 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-571094 status: exit status 7 (90.441278ms)

                                                
                                                
-- stdout --
	multinode-571094
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-571094-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-571094 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-571094 status --alsologtostderr: exit status 7 (106.83233ms)

                                                
                                                
-- stdout --
	multinode-571094
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-571094-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1122 00:37:50.343384  630330 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:37:50.343538  630330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:37:50.343549  630330 out.go:374] Setting ErrFile to fd 2...
	I1122 00:37:50.343555  630330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:37:50.343814  630330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1122 00:37:50.344020  630330 out.go:368] Setting JSON to false
	I1122 00:37:50.344051  630330 mustload.go:66] Loading cluster: multinode-571094
	I1122 00:37:50.344143  630330 notify.go:221] Checking for updates...
	I1122 00:37:50.344499  630330 config.go:182] Loaded profile config "multinode-571094": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:37:50.344519  630330 status.go:174] checking status of multinode-571094 ...
	I1122 00:37:50.345041  630330 cli_runner.go:164] Run: docker container inspect multinode-571094 --format={{.State.Status}}
	I1122 00:37:50.364684  630330 status.go:371] multinode-571094 host status = "Stopped" (err=<nil>)
	I1122 00:37:50.364704  630330 status.go:384] host is not running, skipping remaining checks
	I1122 00:37:50.364712  630330 status.go:176] multinode-571094 status: &{Name:multinode-571094 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1122 00:37:50.364744  630330 status.go:174] checking status of multinode-571094-m02 ...
	I1122 00:37:50.365058  630330 cli_runner.go:164] Run: docker container inspect multinode-571094-m02 --format={{.State.Status}}
	I1122 00:37:50.396279  630330 status.go:371] multinode-571094-m02 host status = "Stopped" (err=<nil>)
	I1122 00:37:50.396300  630330 status.go:384] host is not running, skipping remaining checks
	I1122 00:37:50.396306  630330 status.go:176] multinode-571094-m02 status: &{Name:multinode-571094-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.07s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (54.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-571094 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-571094 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (53.822848154s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-571094 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (54.54s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (37.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-571094
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-571094-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-571094-m02 --driver=docker  --container-runtime=crio: exit status 14 (156.574439ms)

                                                
                                                
-- stdout --
	* [multinode-571094-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21934
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21934-513600/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-513600/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-571094-m02' is duplicated with machine name 'multinode-571094-m02' in profile 'multinode-571094'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-571094-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-571094-m03 --driver=docker  --container-runtime=crio: (34.940308569s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-571094
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-571094: exit status 80 (320.495181ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-571094 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-571094-m03 already exists in multinode-571094-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-571094-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-571094-m03: (2.051149355s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.54s)

                                                
                                    
x
+
TestPreload (150.31s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-311169 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-311169 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (57.553418681s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-311169 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-311169 image pull gcr.io/k8s-minikube/busybox: (2.170228808s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-311169
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-311169: (5.926418137s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-311169 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1122 00:40:40.217082  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-311169 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m22.003378696s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-311169 image list
helpers_test.go:175: Cleaning up "test-preload-311169" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-311169
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-311169: (2.42210038s)
--- PASS: TestPreload (150.31s)
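
The final image list at preload_test.go:70 confirms that the busybox image pulled before the stop is still present after the preloaded restart. A sketch of that check; the substring match is an approximation, since the image may be listed with a tag or digest suffix:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const profile = "test-preload-311169" // from the run above
	const image = "gcr.io/k8s-minikube/busybox"
	out, err := exec.Command("out/minikube-linux-arm64", "-p", profile, "image", "list").Output()
	if err != nil {
		fmt.Println("minikube image list failed:", err)
		return
	}
	for _, line := range strings.Split(string(out), "\n") {
		if strings.Contains(line, image) {
			fmt.Println("image survived the restart:", strings.TrimSpace(line))
			return
		}
	}
	fmt.Println("image missing after restart:", image)
}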

                                                
                                    
x
+
TestScheduledStopUnix (107.95s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-936900 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-936900 --memory=3072 --driver=docker  --container-runtime=crio: (31.721102363s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-936900 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1122 00:42:28.886044  644327 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:42:28.886295  644327 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:42:28.886327  644327 out.go:374] Setting ErrFile to fd 2...
	I1122 00:42:28.886350  644327 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:42:28.886631  644327 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1122 00:42:28.886934  644327 out.go:368] Setting JSON to false
	I1122 00:42:28.887103  644327 mustload.go:66] Loading cluster: scheduled-stop-936900
	I1122 00:42:28.887527  644327 config.go:182] Loaded profile config "scheduled-stop-936900": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:42:28.887651  644327 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/scheduled-stop-936900/config.json ...
	I1122 00:42:28.887877  644327 mustload.go:66] Loading cluster: scheduled-stop-936900
	I1122 00:42:28.888071  644327 config.go:182] Loaded profile config "scheduled-stop-936900": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-936900 -n scheduled-stop-936900
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-936900 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1122 00:42:29.302662  644416 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:42:29.304370  644416 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:42:29.304420  644416 out.go:374] Setting ErrFile to fd 2...
	I1122 00:42:29.304439  644416 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:42:29.304754  644416 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1122 00:42:29.305108  644416 out.go:368] Setting JSON to false
	I1122 00:42:29.306400  644416 daemonize_unix.go:73] killing process 644346 as it is an old scheduled stop
	I1122 00:42:29.306523  644416 mustload.go:66] Loading cluster: scheduled-stop-936900
	I1122 00:42:29.307023  644416 config.go:182] Loaded profile config "scheduled-stop-936900": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:42:29.307119  644416 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/scheduled-stop-936900/config.json ...
	I1122 00:42:29.307305  644416 mustload.go:66] Loading cluster: scheduled-stop-936900
	I1122 00:42:29.307437  644416 config.go:182] Loaded profile config "scheduled-stop-936900": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1122 00:42:29.316353  516937 retry.go:31] will retry after 80.494µs: open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/scheduled-stop-936900/pid: no such file or directory
I1122 00:42:29.317486  516937 retry.go:31] will retry after 133.916µs: open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/scheduled-stop-936900/pid: no such file or directory
I1122 00:42:29.318613  516937 retry.go:31] will retry after 166.953µs: open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/scheduled-stop-936900/pid: no such file or directory
I1122 00:42:29.319744  516937 retry.go:31] will retry after 379.89µs: open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/scheduled-stop-936900/pid: no such file or directory
I1122 00:42:29.320863  516937 retry.go:31] will retry after 706.545µs: open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/scheduled-stop-936900/pid: no such file or directory
I1122 00:42:29.321985  516937 retry.go:31] will retry after 1.047765ms: open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/scheduled-stop-936900/pid: no such file or directory
I1122 00:42:29.323101  516937 retry.go:31] will retry after 1.63999ms: open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/scheduled-stop-936900/pid: no such file or directory
I1122 00:42:29.325235  516937 retry.go:31] will retry after 1.560595ms: open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/scheduled-stop-936900/pid: no such file or directory
I1122 00:42:29.327416  516937 retry.go:31] will retry after 1.725005ms: open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/scheduled-stop-936900/pid: no such file or directory
I1122 00:42:29.329596  516937 retry.go:31] will retry after 3.015476ms: open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/scheduled-stop-936900/pid: no such file or directory
I1122 00:42:29.332682  516937 retry.go:31] will retry after 7.252177ms: open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/scheduled-stop-936900/pid: no such file or directory
I1122 00:42:29.340986  516937 retry.go:31] will retry after 6.825216ms: open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/scheduled-stop-936900/pid: no such file or directory
I1122 00:42:29.348209  516937 retry.go:31] will retry after 9.057871ms: open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/scheduled-stop-936900/pid: no such file or directory
I1122 00:42:29.357552  516937 retry.go:31] will retry after 16.156859ms: open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/scheduled-stop-936900/pid: no such file or directory
I1122 00:42:29.374761  516937 retry.go:31] will retry after 43.604277ms: open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/scheduled-stop-936900/pid: no such file or directory
I1122 00:42:29.419120  516937 retry.go:31] will retry after 26.668886ms: open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/scheduled-stop-936900/pid: no such file or directory
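The retry.go lines above show the test polling for the scheduled-stop pid file with roughly doubling delays until it appears. A minimal sketch of that wait loop, assuming a hypothetical path and timeout (not minikube's retry package):

// Sketch: poll for a file with exponential backoff, as the retry lines above do.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForPidFile(path string, maxWait time.Duration) error {
	delay := 100 * time.Microsecond
	deadline := time.Now().Add(maxWait)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil // pid file exists; the scheduled stop was registered
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(delay)
		delay *= 2 // roughly doubles each attempt, as in the log above
	}
}

func main() {
	// Hypothetical path, mirroring the profile layout seen in the log.
	err := waitForPidFile("/home/jenkins/.minikube/profiles/scheduled-stop-936900/pid", 5*time.Second)
	fmt.Println(err)
}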
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-936900 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
E1122 00:42:44.618914  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/functional-354825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-936900 -n scheduled-stop-936900
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-936900
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-936900 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1122 00:42:55.228825  644780 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:42:55.228943  644780 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:42:55.228955  644780 out.go:374] Setting ErrFile to fd 2...
	I1122 00:42:55.228961  644780 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:42:55.229236  644780 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1122 00:42:55.229509  644780 out.go:368] Setting JSON to false
	I1122 00:42:55.229613  644780 mustload.go:66] Loading cluster: scheduled-stop-936900
	I1122 00:42:55.230071  644780 config.go:182] Loaded profile config "scheduled-stop-936900": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:42:55.230164  644780 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/scheduled-stop-936900/config.json ...
	I1122 00:42:55.230380  644780 mustload.go:66] Loading cluster: scheduled-stop-936900
	I1122 00:42:55.230516  644780 config.go:182] Loaded profile config "scheduled-stop-936900": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-936900
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-936900: exit status 7 (65.041364ms)

                                                
                                                
-- stdout --
	scheduled-stop-936900
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-936900 -n scheduled-stop-936900
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-936900 -n scheduled-stop-936900: exit status 7 (64.408233ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-936900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-936900
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-936900: (4.698657927s)
--- PASS: TestScheduledStopUnix (107.95s)

                                                
                                    
TestInsufficientStorage (13.25s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-445064 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-445064 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.681798292s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b3e5c425-535d-43b3-94c9-31282cc7c7b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-445064] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"38eba856-d901-4244-a26c-a2583479ecba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21934"}}
	{"specversion":"1.0","id":"d6923a4f-ed46-4da2-85e1-0f5aaabdc8e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2c983759-e118-42b7-9522-4c042076cb20","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21934-513600/kubeconfig"}}
	{"specversion":"1.0","id":"33b8f389-ee90-46b7-8221-9f831f11069a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-513600/.minikube"}}
	{"specversion":"1.0","id":"283ac7b7-0440-4b7f-81df-562cf7f230bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"fde54e4a-6925-4591-82c7-ab7bf669c508","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"dee9ba08-8e2d-4228-996d-d4723a674e5c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"3d2d381b-9268-4786-b2c7-2c4169173447","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"3c4c9fe3-db2c-4cdc-baf5-9289f3850548","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"0a459d0a-0671-48cb-9aa5-951b80db1be6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"e7f7d86f-4cf4-4591-ab48-4fa9a3e9768d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-445064\" primary control-plane node in \"insufficient-storage-445064\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"4fe3aba6-0461-43b4-aae6-49c627c203af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1763588073-21934 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"7142b8c3-5375-477a-988d-7f680ed7ec4e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"e6bebc00-7567-477d-8e43-6fd6eb489ee2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
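Each stdout line above is a CloudEvents-style JSON object (specversion, id, source, type, data) emitted by --output=json. A small sketch of decoding one such line; the struct is inferred from the sample output, not taken from minikube's source:

// Sketch: decode one line of minikube's --output=json event stream.
package main

import (
	"encoding/json"
	"fmt"
)

type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	line := `{"specversion":"1.0","id":"x","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE"}}`
	var ev minikubeEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	fmt.Println(ev.Type, ev.Data["name"], ev.Data["exitcode"])
}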
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-445064 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-445064 --output=json --layout=cluster: exit status 7 (286.749008ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-445064","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-445064","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1122 00:43:56.017583  646495 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-445064" does not appear in /home/jenkins/minikube-integration/21934-513600/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-445064 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-445064 --output=json --layout=cluster: exit status 7 (296.427492ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-445064","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-445064","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1122 00:43:56.314448  646561 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-445064" does not appear in /home/jenkins/minikube-integration/21934-513600/kubeconfig
	E1122 00:43:56.324502  646561 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/insufficient-storage-445064/events.json: no such file or directory

                                                
                                                
** /stderr **
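The --output=json --layout=cluster payloads above share one schema: a top-level status plus per-node component statuses. A sketch of Go types that would unmarshal them, with field names and types inferred from the samples rather than taken from minikube's own definitions:

// Sketch: types matching the --layout=cluster JSON shown above.
package main

import (
	"encoding/json"
	"fmt"
)

type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type node struct {
	Name       string               `json:"Name"`
	StatusCode int                  `json:"StatusCode"`
	StatusName string               `json:"StatusName"`
	Components map[string]component `json:"Components"`
}

type clusterStatus struct {
	Name          string               `json:"Name"`
	StatusCode    int                  `json:"StatusCode"`
	StatusName    string               `json:"StatusName"`
	StatusDetail  string               `json:"StatusDetail"`
	BinaryVersion string               `json:"BinaryVersion"`
	Components    map[string]component `json:"Components"`
	Nodes         []node               `json:"Nodes"`
}

func main() {
	raw := `{"Name":"insufficient-storage-445064","StatusCode":507,"StatusName":"InsufficientStorage","Nodes":[]}`
	var st clusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	fmt.Println(st.Name, st.StatusCode, st.StatusName)
}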
helpers_test.go:175: Cleaning up "insufficient-storage-445064" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-445064
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-445064: (1.979889071s)
--- PASS: TestInsufficientStorage (13.25s)

                                                
                                    
TestRunningBinaryUpgrade (62.23s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.2890686607 start -p running-upgrade-234956 --memory=3072 --vm-driver=docker  --container-runtime=crio
E1122 00:47:44.616351  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/functional-354825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.2890686607 start -p running-upgrade-234956 --memory=3072 --vm-driver=docker  --container-runtime=crio: (33.705833318s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-234956 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-234956 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (18.335814966s)
helpers_test.go:175: Cleaning up "running-upgrade-234956" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-234956
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-234956: (1.965146535s)
--- PASS: TestRunningBinaryUpgrade (62.23s)

                                                
                                    
TestKubernetesUpgrade (349.35s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-134864 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-134864 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (38.477383053s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-134864
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-134864: (1.451629382s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-134864 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-134864 status --format={{.Host}}: exit status 7 (110.749536ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-134864 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-134864 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m38.714106804s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-134864 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-134864 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-134864 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (109.674543ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-134864] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21934
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21934-513600/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-513600/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-134864
	    minikube start -p kubernetes-upgrade-134864 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1348642 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-134864 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
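The K8S_DOWNGRADE_UNSUPPORTED exit above reflects a simple guard: starting an existing cluster with an older --kubernetes-version is refused. A hedged sketch of such a check using golang.org/x/mod/semver for the comparison (minikube's real check lives in its own code and may differ):

// Sketch: refuse a Kubernetes downgrade on an existing cluster.
package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

func checkDowngrade(existing, requested string) error {
	// semver.Compare returns <0 when requested is older than existing.
	if semver.Compare(requested, existing) < 0 {
		return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s", existing, requested)
	}
	return nil
}

func main() {
	fmt.Println(checkDowngrade("v1.34.1", "v1.28.0")) // downgrade rejected
	fmt.Println(checkDowngrade("v1.34.1", "v1.34.1")) // nil: same version is fine
}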
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-134864 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-134864 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (27.648904863s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-134864" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-134864
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-134864: (2.636794568s)
--- PASS: TestKubernetesUpgrade (349.35s)

                                                
                                    
TestMissingContainerUpgrade (134.74s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.3594704177 start -p missing-upgrade-264026 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.3594704177 start -p missing-upgrade-264026 --memory=3072 --driver=docker  --container-runtime=crio: (1m16.023128528s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-264026
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-264026
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-264026 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1122 00:45:23.287301  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:45:40.218630  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:45:47.683234  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/functional-354825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-264026 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (47.733388194s)
helpers_test.go:175: Cleaning up "missing-upgrade-264026" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-264026
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-264026: (2.281691447s)
--- PASS: TestMissingContainerUpgrade (134.74s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-307118 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-307118 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (100.305416ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-307118] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21934
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21934-513600/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-513600/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (40.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-307118 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-307118 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (39.971479338s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-307118 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (40.33s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (10.04s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-307118 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-307118 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (5.722226924s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-307118 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-307118 status -o json: exit status 2 (403.059395ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-307118","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-307118
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-307118: (3.911892017s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (10.04s)

                                                
                                    
TestNoKubernetes/serial/Start (9.16s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-307118 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-307118 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (9.155638239s)
--- PASS: TestNoKubernetes/serial/Start (9.16s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21934-513600/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-307118 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-307118 "sudo systemctl is-active --quiet service kubelet": exit status 1 (346.943561ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.35s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.13s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.13s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.43s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-307118
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-307118: (1.430301598s)
--- PASS: TestNoKubernetes/serial/Stop (1.43s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.73s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-307118 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-307118 --driver=docker  --container-runtime=crio: (7.728993713s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.73s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.43s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-307118 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-307118 "sudo systemctl is-active --quiet service kubelet": exit status 1 (434.528413ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.43s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (8.07s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (8.07s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (55.79s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.2090831124 start -p stopped-upgrade-070222 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.2090831124 start -p stopped-upgrade-070222 --memory=3072 --vm-driver=docker  --container-runtime=crio: (32.86230121s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.2090831124 -p stopped-upgrade-070222 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.2090831124 -p stopped-upgrade-070222 stop: (1.256290287s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-070222 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-070222 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (21.667370699s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (55.79s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.22s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-070222
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-070222: (1.223571325s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.22s)

                                                
                                    
TestPause/serial/Start (50.36s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-028559 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-028559 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (50.364533251s)
--- PASS: TestPause/serial/Start (50.36s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (25.76s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-028559 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-028559 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (25.743457609s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (25.76s)

                                                
                                    
TestNetworkPlugins/group/false (4.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-163229 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-163229 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (271.83338ms)

                                                
                                                
-- stdout --
	* [false-163229] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21934
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21934-513600/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-513600/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1122 00:50:37.363359  683904 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:50:37.363908  683904 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:50:37.363944  683904 out.go:374] Setting ErrFile to fd 2...
	I1122 00:50:37.363963  683904 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:50:37.364264  683904 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-513600/.minikube/bin
	I1122 00:50:37.364714  683904 out.go:368] Setting JSON to false
	I1122 00:50:37.365658  683904 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":19954,"bootTime":1763752684,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1122 00:50:37.365753  683904 start.go:143] virtualization:  
	I1122 00:50:37.369500  683904 out.go:179] * [false-163229] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1122 00:50:37.372586  683904 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:50:37.372647  683904 notify.go:221] Checking for updates...
	I1122 00:50:37.379115  683904 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:50:37.382104  683904 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-513600/kubeconfig
	I1122 00:50:37.385169  683904 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-513600/.minikube
	I1122 00:50:37.388091  683904 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1122 00:50:37.391041  683904 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:50:37.394698  683904 config.go:182] Loaded profile config "kubernetes-upgrade-134864": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1122 00:50:37.394802  683904 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:50:37.431634  683904 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1122 00:50:37.431757  683904 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:50:37.544441  683904 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-22 00:50:37.533929338 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:50:37.544552  683904 docker.go:319] overlay module found
	I1122 00:50:37.547794  683904 out.go:179] * Using the docker driver based on user configuration
	I1122 00:50:37.550641  683904 start.go:309] selected driver: docker
	I1122 00:50:37.550663  683904 start.go:930] validating driver "docker" against <nil>
	I1122 00:50:37.550676  683904 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:50:37.554088  683904 out.go:203] 
	W1122 00:50:37.556990  683904 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1122 00:50:37.559846  683904 out.go:203] 

                                                
                                                
** /stderr **
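The MK_USAGE failure above is a flag-validation step: with --container-runtime=crio, passing --cni=false is rejected because cri-o needs a CNI plugin. An illustrative sketch of that kind of check (not minikube's actual validation code):

// Sketch: reject --cni=false when the selected runtime needs a CNI plugin.
package main

import (
	"errors"
	"fmt"
)

func validateCNI(containerRuntime, cni string) error {
	if cni == "false" && containerRuntime == "crio" {
		return errors.New(`the "crio" container runtime requires CNI`)
	}
	return nil
}

func main() {
	fmt.Println(validateCNI("crio", "false")) // rejected, as in the test above
	fmt.Println(validateCNI("crio", "bridge")) // nil: an explicit CNI is fine
}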
E1122 00:50:40.217566  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:88: 
----------------------- debugLogs start: false-163229 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-163229

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-163229

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-163229

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-163229

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-163229

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-163229

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-163229

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-163229

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-163229

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-163229

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163229"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163229"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163229"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-163229

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163229"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163229"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-163229" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-163229" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-163229" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-163229" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-163229" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-163229" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-163229" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-163229" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163229"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163229"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163229"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163229"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163229"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-163229" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-163229" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-163229" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163229"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163229"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163229"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163229"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163229"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 22 Nov 2025 00:50:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-134864
contexts:
- context:
    cluster: kubernetes-upgrade-134864
    extensions:
    - extension:
        last-update: Sat, 22 Nov 2025 00:50:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-134864
  name: kubernetes-upgrade-134864
current-context: kubernetes-upgrade-134864
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-134864
  user:
    client-certificate: /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/kubernetes-upgrade-134864/client.crt
    client-key: /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/kubernetes-upgrade-134864/client.key
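The kubeconfig above can also be inspected programmatically; a short sketch using k8s.io/client-go/tools/clientcmd to load it and print the current context (the path is the KUBECONFIG from this run; treat the snippet as illustrative, not part of the test suite):

// Sketch: load the kubeconfig shown above and list its contexts.
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/21934-513600/kubeconfig")
	if err != nil {
		panic(err)
	}
	fmt.Println("current-context:", cfg.CurrentContext)
	for name, ctx := range cfg.Contexts {
		fmt.Printf("context %s -> cluster %s (user %s)\n", name, ctx.Cluster, ctx.AuthInfo)
	}
}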

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-163229

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163229"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163229"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163229"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163229"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163229"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163229"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163229"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163229"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163229"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163229"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163229"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163229"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163229"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163229"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163229"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163229"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163229"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-163229"

                                                
                                                
----------------------- debugLogs end: false-163229 [took: 3.777628076s] --------------------------------
helpers_test.go:175: Cleaning up "false-163229" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-163229
--- PASS: TestNetworkPlugins/group/false (4.21s)
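Note: every ">>> ..." query in the debugLogs above prints the same "Profile \"false-163229\" not found" hint because that profile was never created during this run, and the captured kubectl config only contains the kubernetes-upgrade-134864 context, so the context lookup for false-163229 fails as well. As a rough, hypothetical sketch (not part of the test itself), the same diagnostics could be gathered by hand once such a profile actually exists; the driver/runtime flags here simply mirror the other crio runs in this report:

    out/minikube-linux-arm64 profile list
    out/minikube-linux-arm64 start -p false-163229 --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 logs -p false-163229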

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (60.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-625837 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1122 00:52:44.616372  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/functional-354825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-625837 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m0.005151425s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (60.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (9.43s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-625837 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [770c3edc-3b43-4aa9-b57c-6884dc11b4dc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [770c3edc-3b43-4aa9-b57c-6884dc11b4dc] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.00416913s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-625837 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.43s)
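For reference, the DeployApp step above is plain kubectl against the profile's context: it applies testdata/busybox.yaml, waits for pods carrying the integration-test=busybox label to report Ready, then execs into the pod. A hedged sketch of the equivalent manual check (label, pod name and context are taken from the log; the actual manifest contents are not shown here):

    kubectl --context old-k8s-version-625837 get pods -l integration-test=busybox
    kubectl --context old-k8s-version-625837 exec busybox -- /bin/sh -c "ulimit -n"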

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (12.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-625837 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-625837 --alsologtostderr -v=3: (12.005878078s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-625837 -n old-k8s-version-625837
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-625837 -n old-k8s-version-625837: exit status 7 (79.025188ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-625837 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (51.91s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-625837 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-625837 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (51.488079937s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-625837 -n old-k8s-version-625837
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (51.91s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-kp26b" [8bf88fab-10f4-4b9e-9866-f2cc0cade558] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003432804s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-kp26b" [8bf88fab-10f4-4b9e-9866-f2cc0cade558] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003006929s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-625837 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-625837 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (80.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-165130 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-165130 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m20.130563106s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (80.13s)
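The --preload=false flag in this run disables minikube's preloaded image tarball, so the container images are pulled during start; that is consistent with this FirstStart taking noticeably longer (1m20s) than the preloaded crio starts elsewhere in this report. One way to inspect what ended up in the runtime afterwards (the same subcommand the VerifyKubernetesImages step uses later):

    out/minikube-linux-arm64 -p no-preload-165130 image list --format=json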

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (86.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-879000 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1122 00:55:40.217567  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-879000 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m26.016371972s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (86.02s)
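The --embed-certs flag used here embeds the client credentials directly in the kubeconfig rather than referencing file paths like the client-certificate / client-key entries in the kubectl config dump earlier in this report; the embed-certs profile's user entry should instead carry client-certificate-data / client-key-data fields. A hedged way to check (shows the full, unredacted kubeconfig):

    kubectl config view --raw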

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (8.32s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-165130 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e6b0f65d-f761-4d8a-b568-8eb439d4ec02] Pending
helpers_test.go:352: "busybox" [e6b0f65d-f761-4d8a-b568-8eb439d4ec02] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e6b0f65d-f761-4d8a-b568-8eb439d4ec02] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003358769s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-165130 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.32s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (12.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-165130 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-165130 --alsologtostderr -v=3: (12.010862041s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-165130 -n no-preload-165130
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-165130 -n no-preload-165130: exit status 7 (69.414189ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-165130 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (47.32s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-165130 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-165130 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (46.768491785s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-165130 -n no-preload-165130
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (47.32s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (8.47s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-879000 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [9c85fc23-1f39-430f-a828-390ca91fd200] Pending
helpers_test.go:352: "busybox" [9c85fc23-1f39-430f-a828-390ca91fd200] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [9c85fc23-1f39-430f-a828-390ca91fd200] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004235195s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-879000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.47s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (12.51s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-879000 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-879000 --alsologtostderr -v=3: (12.506609754s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.51s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-879000 -n embed-certs-879000
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-879000 -n embed-certs-879000: exit status 7 (82.604122ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-879000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (55.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-879000 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-879000 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (54.927443384s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-879000 -n embed-certs-879000
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (55.38s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-xsqns" [fc63b412-889b-418a-a30a-c1de29e57030] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003505059s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.14s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-xsqns" [fc63b412-889b-418a-a30a-c1de29e57030] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003857746s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-165130 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.14s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-165130 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (52.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-882305 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1122 00:57:44.616372  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/functional-354825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-882305 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (52.287084127s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (52.29s)
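The only notable difference from the default profile here is --apiserver-port=8444, so this profile's kubeconfig entry advertises the API server on port 8444 rather than the usual 8443 seen in the config dump earlier. A quick, hedged way to confirm which endpoint kubectl is talking to:

    kubectl --context default-k8s-diff-port-882305 cluster-info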

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-mrcpd" [10d444b4-3695-440b-8e1b-8ddb92023d36] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003477054s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-mrcpd" [10d444b4-3695-440b-8e1b-8ddb92023d36] Running
E1122 00:58:05.775488  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/old-k8s-version-625837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:58:05.781913  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/old-k8s-version-625837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:58:05.793212  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/old-k8s-version-625837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:58:05.814584  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/old-k8s-version-625837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:58:05.856005  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/old-k8s-version-625837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:58:05.938029  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/old-k8s-version-625837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:58:06.099990  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/old-k8s-version-625837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:58:06.421774  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/old-k8s-version-625837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:58:07.064038  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/old-k8s-version-625837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:58:08.345340  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/old-k8s-version-625837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004076374s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-879000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-879000 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (37.83s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-683181 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1122 00:58:26.269740  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/old-k8s-version-625837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-683181 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (37.825564147s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (37.83s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.41s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-882305 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [c3ec38da-dbd6-47f5-acb8-b65445289488] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [c3ec38da-dbd6-47f5-acb8-b65445289488] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003873097s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-882305 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.41s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-882305 --alsologtostderr -v=3
E1122 00:58:46.751046  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/old-k8s-version-625837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-882305 --alsologtostderr -v=3: (12.249081349s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.25s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-882305 -n default-k8s-diff-port-882305
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-882305 -n default-k8s-diff-port-882305: exit status 7 (72.179238ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-882305 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (52.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-882305 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-882305 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (51.819061776s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-882305 -n default-k8s-diff-port-882305
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (52.30s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (1.51s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-683181 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-683181 --alsologtostderr -v=3: (1.505125932s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.51s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-683181 -n newest-cni-683181
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-683181 -n newest-cni-683181: exit status 7 (118.039697ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-683181 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.27s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (21.79s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-683181 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-683181 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (21.165185782s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-683181 -n newest-cni-683181
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (21.79s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)
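The 0.00s newest-cni checks above are effectively skips: with --network-plugin=cni and no CNI manifest applied, workload pods cannot schedule, so the test only waits for apiserver, system_pods and default_sa (per the --wait flag) and prints the warning instead of deploying anything. A hedged sketch of how the node and pod state could be confirmed by hand in that situation:

    kubectl --context newest-cni-683181 get nodes
    kubectl --context newest-cni-683181 get pods -A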

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.35s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-683181 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (85.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-163229 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-163229 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m25.393734769s)
--- PASS: TestNetworkPlugins/group/auto/Start (85.40s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-sx5ls" [a27d7302-b089-4adf-a86b-4d6b9bfdb28c] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003986286s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-sx5ls" [a27d7302-b089-4adf-a86b-4d6b9bfdb28c] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004349213s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-882305 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-882305 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (79.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-163229 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1122 01:00:40.218002  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:00:49.634237  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/old-k8s-version-625837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-163229 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m19.83286102s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (79.83s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-163229 "pgrep -a kubelet"
I1122 01:00:59.913685  516937 config.go:182] Loaded profile config "auto-163229": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-163229 replace --force -f testdata/netcat-deployment.yaml
I1122 01:01:00.515246  516937 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-rqkwq" [9e185da1-714b-43ec-ba21-aabc98c54f51] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1122 01:01:02.908480  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/no-preload-165130/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:01:02.914784  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/no-preload-165130/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:01:02.926151  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/no-preload-165130/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:01:02.947511  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/no-preload-165130/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:01:02.988844  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/no-preload-165130/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:01:03.070215  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/no-preload-165130/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:01:03.231697  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/no-preload-165130/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:01:03.553849  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/no-preload-165130/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:01:04.195817  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/no-preload-165130/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:01:05.477534  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/no-preload-165130/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-rqkwq" [9e185da1-714b-43ec-ba21-aabc98c54f51] Running
E1122 01:01:08.038994  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/no-preload-165130/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004207328s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.61s)
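The NetCatPod step replaces the netcat deployment from testdata/netcat-deployment.yaml and then waits for its pod (label app=netcat) to become Ready; the later DNS, Localhost and HairPin checks all exec into that same deployment. Roughly the same wait can be reproduced by hand with standard kubectl (a sketch; the manifest contents are not shown in this log):

    kubectl --context auto-163229 rollout status deployment/netcat
    kubectl --context auto-163229 get pods -l app=netcat -o wide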

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-163229 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-163229 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-163229 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-cbxk5" [6804cdb4-5d36-49b5-8b14-a8c594fe5441] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003598404s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
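ControllerPod confirms the kindnet CNI agent itself is healthy: it waits for pods labelled app=kindnet in kube-system, which kindnet runs as a DaemonSet on every node. A hedged manual equivalent using only the label and namespace from the log:

    kubectl --context kindnet-163229 -n kube-system get pods -l app=kindnet -o wide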

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (88.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-163229 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-163229 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m28.867368919s)
--- PASS: TestNetworkPlugins/group/calico/Start (88.87s)
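Each Start step is a plain minikube invocation; within this job only the profile name and the CNI selection change between groups. A sketch of the Calico start, with the flags taken verbatim from the run above:

  out/minikube-linux-arm64 start -p calico-163229 \
    --memory=3072 --wait=true --wait-timeout=15m \
    --cni=calico --driver=docker --container-runtime=crio

Because --wait=true blocks until the core components are Ready, the CNI start durations (roughly 60-90s in this run) dominate each group's total time.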

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-163229 "pgrep -a kubelet"
I1122 01:01:37.134114  516937 config.go:182] Loaded profile config "kindnet-163229": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)
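KubeletFlags only confirms that kubelet is running inside the node container and records its full command line. The same inspection can be done directly, assuming the kindnet-163229 profile is up:

  out/minikube-linux-arm64 ssh -p kindnet-163229 "pgrep -a kubelet"

pgrep -a prints the matching process with its complete argument list, so the output shows exactly which flags minikube passed to kubelet.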

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.47s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-163229 replace --force -f testdata/netcat-deployment.yaml
I1122 01:01:37.570770  516937 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-mxr7g" [698bc6fe-4923-41e4-8c0a-a2512a94c805] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-mxr7g" [698bc6fe-4923-41e4-8c0a-a2512a94c805] Running
E1122 01:01:43.885612  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/no-preload-165130/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004102018s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.47s)
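NetCatPod recreates the probe workload with kubectl replace --force and then polls (via helpers_test.go) until a pod labelled app=netcat is Running and Ready. A hand-rolled approximation of the same wait, using kubectl wait instead of the test helper:

  kubectl --context kindnet-163229 replace --force -f testdata/netcat-deployment.yaml
  kubectl --context kindnet-163229 wait --for=condition=Ready pod -l app=netcat --timeout=15m

The 9-12 second healthy times seen across the groups are mostly the time to pull and start the dnsutils container.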

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.34s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-163229 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.34s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-163229 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-163229 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (65.8s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-163229 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E1122 01:02:24.847546  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/no-preload-165130/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:02:27.684802  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/functional-354825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:02:44.615800  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/functional-354825/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-163229 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m5.797491973s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (65.80s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-8pxg6" [16a005b6-a663-4247-a09f-822e78e413ee] Running
E1122 01:03:05.775539  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/old-k8s-version-625837/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004202973s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
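ControllerPod waits for the CNI's own agent pod to become healthy, selected purely by label: app=kindnet in kube-system for kindnet, k8s-app=calico-node in kube-system for Calico, and app=flannel in kube-flannel for flannel. The equivalent manual check for this group:

  kubectl --context calico-163229 -n kube-system get pods -l k8s-app=calico-node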

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.32s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-163229 "pgrep -a kubelet"
I1122 01:03:07.893744  516937 config.go:182] Loaded profile config "calico-163229": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.26s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-163229 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-rlrtq" [da00730a-2dac-4adc-b272-bdf69a678a6f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-rlrtq" [da00730a-2dac-4adc-b272-bdf69a678a6f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004671271s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.26s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-163229 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-163229 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-163229 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-163229 "pgrep -a kubelet"
I1122 01:03:21.226589  516937 config.go:182] Loaded profile config "custom-flannel-163229": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.28s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-163229 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-mnvcg" [0333cb8e-441e-42e9-a086-d4b01f033aec] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-mnvcg" [0333cb8e-441e-42e9-a086-d4b01f033aec] Running
E1122 01:03:28.808174  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:03:28.814560  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:03:28.825952  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:03:28.847299  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:03:28.888765  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:03:28.970555  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:03:29.131788  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:03:29.453567  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:03:30.095328  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:03:31.377017  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.003121361s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.28s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-163229 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-163229 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-163229 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (81.27s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-163229 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E1122 01:03:46.769129  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/no-preload-165130/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:03:49.304528  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-163229 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m21.26899497s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (81.27s)
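Unlike the other groups, enable-default-cni passes no --cni plugin at all; it opts into minikube's built-in default CNI instead. Sketch of the invocation, with the remaining flags unchanged from the rest of the matrix:

  out/minikube-linux-arm64 start -p enable-default-cni-163229 \
    --memory=3072 --wait=true --wait-timeout=15m \
    --enable-default-cni=true --driver=docker --container-runtime=crio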

                                                
                                    
TestNetworkPlugins/group/flannel/Start (60.11s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-163229 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1122 01:04:09.786269  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 01:04:50.748050  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/default-k8s-diff-port-882305/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-163229 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m0.109387822s)
--- PASS: TestNetworkPlugins/group/flannel/Start (60.11s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-nh59z" [ade0f08e-551c-4ed7-9393-7d8563250628] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003981053s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-163229 "pgrep -a kubelet"
I1122 01:05:02.636521  516937 config.go:182] Loaded profile config "enable-default-cni-163229": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.29s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-163229 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-nsxdp" [67f317f4-f359-46da-a0d4-45d1ac4ccee9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-nsxdp" [67f317f4-f359-46da-a0d4-45d1ac4ccee9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.004377462s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.29s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.42s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-163229 "pgrep -a kubelet"
I1122 01:05:03.980442  516937 config.go:182] Loaded profile config "flannel-163229": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.42s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.39s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-163229 replace --force -f testdata/netcat-deployment.yaml
I1122 01:05:04.357311  516937 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-b62fg" [1e989f1a-c10b-4a1f-905a-9c4471eee2b3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-b62fg" [1e989f1a-c10b-4a1f-905a-9c4471eee2b3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003920182s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.39s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-163229 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-163229 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-163229 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.23s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-163229 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-163229 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-163229 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (73.06s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-163229 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1122 01:05:40.218434  516937 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/addons-882841/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-163229 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m13.063213183s)
--- PASS: TestNetworkPlugins/group/bridge/Start (73.06s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-163229 "pgrep -a kubelet"
I1122 01:06:53.381437  516937 config.go:182] Loaded profile config "bridge-163229": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.26s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-163229 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-qqll2" [13ab4ff9-6765-437e-bd10-048828adf716] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-qqll2" [13ab4ff9-6765-437e-bd10-048828adf716] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004018804s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.26s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-163229 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-163229 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-163229 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    

Test skip (31/328)

TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.46s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-291874 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-291874" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-291874
--- SKIP: TestDownloadOnlyKic (0.46s)

                                                
                                    
TestOffline (0s)
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestISOImage (0s)
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.15s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-046489" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-046489
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                    
TestNetworkPlugins/group/kubenet (5.23s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-163229 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-163229

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-163229

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-163229

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-163229

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-163229

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-163229

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-163229

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-163229

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-163229

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-163229

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163229"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163229"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163229"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-163229

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163229"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163229"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-163229" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-163229" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-163229" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-163229" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-163229" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-163229" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-163229" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-163229" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163229"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163229"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163229"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163229"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163229"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-163229" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-163229" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-163229" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163229"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163229"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163229"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163229"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163229"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 22 Nov 2025 00:50:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-134864
contexts:
- context:
    cluster: kubernetes-upgrade-134864
    extensions:
    - extension:
        last-update: Sat, 22 Nov 2025 00:50:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-134864
  name: kubernetes-upgrade-134864
current-context: kubernetes-upgrade-134864
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-134864
  user:
    client-certificate: /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/kubernetes-upgrade-134864/client.crt
    client-key: /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/kubernetes-upgrade-134864/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-163229

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163229"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163229"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163229"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163229"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163229"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163229"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163229"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163229"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163229"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163229"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163229"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163229"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163229"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163229"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163229"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163229"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163229"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-163229"

                                                
                                                
----------------------- debugLogs end: kubenet-163229 [took: 4.999353389s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-163229" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-163229
--- SKIP: TestNetworkPlugins/group/kubenet (5.23s)
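Every kubectl-based query in the kubenet debug dump above fails with "context "kubenet-163229" does not exist" because the profile was never started, so no matching kubeconfig context was ever written. A minimal Go sketch of how such queries could be skipped when the context is absent (contextExists is a hypothetical helper; this is not the harness's actual debugLogs implementation):

// Minimal sketch, assuming a hypothetical contextExists helper; not the
// harness's actual debugLogs implementation.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// contextExists reports whether kubectl knows about the named context.
func contextExists(name string) bool {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false
	}
	for _, ctx := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if ctx == name {
			return true
		}
	}
	return false
}

func main() {
	profile := "kubenet-163229"
	if !contextExists(profile) {
		// Mirrors the situation above: the profile was never started, so
		// kubectl queries would only produce "context does not exist" errors.
		fmt.Printf("context %q not found; skipping kubectl debug queries\n", profile)
		return
	}
	// Only reached when the context exists.
	out, err := exec.Command("kubectl", "--context", profile, "get", "pods", "-A").CombinedOutput()
	fmt.Println(string(out), err)
}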

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-163229 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-163229

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-163229

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-163229

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-163229

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-163229

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-163229

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-163229

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-163229

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-163229

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-163229

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163229"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163229"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163229"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-163229

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163229"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163229"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-163229" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-163229" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-163229" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-163229" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-163229" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-163229" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-163229" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-163229" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163229"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163229"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163229"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163229"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163229"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-163229

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-163229

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-163229" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-163229" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-163229

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-163229

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-163229" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-163229" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-163229" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-163229" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-163229" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163229"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163229"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163229"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163229"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163229"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21934-513600/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 22 Nov 2025 00:50:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-134864
contexts:
- context:
    cluster: kubernetes-upgrade-134864
    extensions:
    - extension:
        last-update: Sat, 22 Nov 2025 00:50:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-134864
  name: kubernetes-upgrade-134864
current-context: kubernetes-upgrade-134864
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-134864
  user:
    client-certificate: /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/kubernetes-upgrade-134864/client.crt
    client-key: /home/jenkins/minikube-integration/21934-513600/.minikube/profiles/kubernetes-upgrade-134864/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-163229

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163229"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163229"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163229"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163229"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163229"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163229"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163229"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163229"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163229"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163229"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163229"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163229"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163229"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163229"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163229"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163229"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163229"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-163229" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-163229"

                                                
                                                
----------------------- debugLogs end: cilium-163229 [took: 4.026072722s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-163229" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-163229
--- SKIP: TestNetworkPlugins/group/cilium (4.20s)
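The cleanup step logged at helpers_test.go:178 removes the leftover cilium-163229 profile with out/minikube-linux-arm64 delete -p cilium-163229. A minimal Go sketch of a best-effort cleanup that tolerates an already-missing profile (cleanupProfile is a hypothetical helper, not the repository's helpers_test.go code):

// Minimal sketch of a best-effort profile cleanup; cleanupProfile is a
// hypothetical helper, not the repository's helpers_test.go code.
package main

import (
	"fmt"
	"os/exec"
)

// cleanupProfile deletes a minikube profile and treats failures (for example
// an already-missing profile) as non-fatal.
func cleanupProfile(binary, profile string) {
	out, err := exec.Command(binary, "delete", "-p", profile).CombinedOutput()
	if err != nil {
		fmt.Printf("cleanup of %q finished with error: %v\n%s", profile, err, out)
		return
	}
	fmt.Printf("cleanup of %q succeeded:\n%s", profile, out)
}

func main() {
	cleanupProfile("out/minikube-linux-arm64", "cilium-163229")
}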

                                                
                                    