Conversation
Force-pushed from 140a7bb to 3653509
Considering the following real-life scenario:

- User X creates cluster A named "foo".
- User X creates cluster B, also named "foo" (Packet allows that).
- User Y creates cluster C named "bar".

All clusters are created in the same facility. Now, if user Y tries to install the cluster-autoscaler component, it will fail, as the facility has 2 devices with the same name, despite them not even belonging to user Y's clusters. This is because we check device uniqueness before checking the cluster name on the devices, which seems incorrect.

This commit breaks down the 'getWorkerUserdata()' function into a few smaller functions, to make it more readable and to allow implementing simple tests for the desired behavior.

Also, previously we were returning the userData of the last node found; now we return the userData of the first node, to simplify the code. There is also a test included for this now.

This commit should also simplify fixing #767.

Closes #766.

Signed-off-by: Mateusz Gozdek <mateusz@kinvolk.io>
Otherwise, running codespell prints a warning that this is a binary file.

Signed-off-by: Mateusz Gozdek <mateusz@kinvolk.io>
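For reference, codespell can also be told to skip files entirely via its skip list, either with the `--skip` flag or in a config file. The paths below are purely illustrative, not the repository's actual layout:

```ini
[codespell]
# Comma-separated globs of files/directories to skip; paths are illustrative.
skip = ./assets/*.png,./docs/images
```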
Force-pushed from 3653509 to 8e29ee5
I was about to make a small PR for ignoring this in codespell.
@invidian One last thing which I forgot before: could you please mark these tests to run in parallel?
Is it really necessary?

0 ✓ (444ms) 12:24:12 invidian@dellxps15mateusz ~/repos/kinvolk/lokomotive (invidian/cluster-autoscaler-fix-duplicated-nodes)$ go test -v -count=1 ./pkg/components/cluster-autoscaler/
=== RUN TestGetClusterWorkersFilterByFacility
--- PASS: TestGetClusterWorkersFilterByFacility (0.00s)
=== RUN TestGetClusterWorkersFilterByCluster
--- PASS: TestGetClusterWorkersFilterByCluster (0.00s)
=== RUN TestGetClusterWorkersFilterNonWorkers
--- PASS: TestGetClusterWorkersFilterNonWorkers (0.00s)
=== RUN TestFindDuplicatedDevicesNonUniqueHostname
--- PASS: TestFindDuplicatedDevicesNonUniqueHostname (0.00s)
=== RUN TestFindDuplicatedDevicesUniqueHostname
--- PASS: TestFindDuplicatedDevicesUniqueHostname (0.00s)
=== RUN TestDeviceHostnames
--- PASS: TestDeviceHostnames (0.00s)
=== RUN TestGetWorkerUserdataNoUserdataOnError
--- PASS: TestGetWorkerUserdataNoUserdataOnError (0.00s)
=== RUN TestGetWorkerUserdataDuplicatedWorkers
--- PASS: TestGetWorkerUserdataDuplicatedWorkers (0.00s)
=== RUN TestGetWorkerUserdataEmptyUserdata
--- PASS: TestGetWorkerUserdataEmptyUserdata (0.00s)
=== RUN TestGetWorkerUserdataFirstDevice
--- PASS: TestGetWorkerUserdataFirstDevice (0.00s)
=== RUN TestGetWorkerUserdataReturnBase64
--- PASS: TestGetWorkerUserdataReturnBase64 (0.00s)
=== RUN TestGetWorkerUserdataDuplicatedWorkersDifferentClusters
--- PASS: TestGetWorkerUserdataDuplicatedWorkersDifferentClusters (0.00s)
=== RUN TestGetWorkerUserdataDuplicatedWorkersIncludeHostnames
--- PASS: TestGetWorkerUserdataDuplicatedWorkersIncludeHostnames (0.00s)
=== RUN TestEmptyConfig
--- PASS: TestEmptyConfig (0.00s)
=== RUN TestEmptyBody
--- PASS: TestEmptyBody (0.00s)
=== RUN TestRender
--- PASS: TestRender (0.00s)
PASS
ok      github.com/kinvolk/lokomotive/pkg/components/cluster-autoscaler    0.029s
0 ✓ (2.228s) 12:24:21 invidian@dellxps15mateusz ~/repos/kinvolk/lokomotive (invidian/cluster-autoscaler-fix-duplicated-nodes)$
What's wrong with parallelizing the unit tests?
It's a lot of boilerplate to add with almost no gain. The test execution takes 0.029s anyway.
I think it sets a precedent for other tests as well: when someone else looks at the code and tries to write similar tests, making them parallel becomes the norm. Though I don't understand what we are losing by making them parallel 😕
Still, the delay of 0.029s is negligible.
As I said, we add boilerplate code, which is not really needed, as there is almost no gain from it. |
surajssd left a comment
I concede, so we can move ahead. But I still believe they should be made parallel.
The tests could be smarter, and we are testing unexported functions, which is generally not recommended, but it should be OK for now.