Add StackHPC Ironic tunings #1011

Open · wants to merge 3 commits into stackhpc/2023.1
Conversation


@jovial jovial commented Mar 22, 2024

No description provided.

Member

@dougszumski dougszumski left a comment


Nice!

```ini
socat_address = {% raw %}{{ api_interface_address }}{% endraw %}
```

```ini
[conductor]
automated_clean = true
```
Member


I saw at some sites we have this cleaning tweak from the Ironic docs:

```ini
[deploy]
erase_devices_priority = 0
erase_devices_metadata_priority = 0

[conductor]
clean_step_priority_override = deploy.erase_devices_express:5
```

I know cleaning is very site-specific. It's probably safer to leave the above out and default to a scrub. Perhaps worth linking to the Ironic docs?
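For readers following along, the settings quoted above go into ironic.conf as below. This is a sketch only: cleaning priorities are site-specific, and `deploy.erase_devices_express` is only available if the hardware manager in use provides that clean step.

```ini
# Sketch: express cleaning, as described in the Ironic docs.
# Disable the full erase steps and prefer the express step.
[deploy]
erase_devices_priority = 0
erase_devices_metadata_priority = 0

[conductor]
clean_step_priority_override = deploy.erase_devices_express:5
```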

Member


I would actually like our default to be fail if secure erase fails, with a note on how to work around that.
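For concreteness, the stricter default being suggested here would look something like the following (a sketch of the proposed behaviour, not what the PR currently sets):

```ini
# Sketch: abort cleaning rather than falling back to shred when
# secure erase fails. Operators could override this per site.
[deploy]
continue_if_disk_secure_erase_fails = false
```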

Contributor Author


I explicitly set a value for erase_devices, as the priority is dependent on the hardware manager in use. What do you think?
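For illustration, pinning the priority explicitly might look like this. The value 10 here is a placeholder, not a recommendation; the appropriate priority depends on the hardware manager in use.

```ini
# Hypothetical example: pin the erase_devices clean step priority
# explicitly rather than relying on the hardware manager's default.
[conductor]
clean_step_priority_override = deploy.erase_devices:10
```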

```ini
heartbeat_interval = 30
# Default is 60 seconds
heartbeat_timeout = 360
sync_local_state_interval = 360
```
Member


I think there is a pthread config we can use here as well now?

Contributor Author


This one:

```ini
# Run the health check heartbeat thread through a native
# python thread by default. If this option is equal to False
# then the health check heartbeat will inherit the execution
# model from the parent process. For example if the parent
# process has monkey patched the stdlib by using
# eventlet/greenlet then the heartbeat will be run through a
# green thread. This option should be set to True only for the
# wsgi services. (boolean value)
#heartbeat_in_pthread = false
```

I guess that should only be set on ironic-api. Were there some other tuning options too?
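If we go that way, an API-only override might look like the sketch below. The file path is illustrative (assuming a kolla-ansible style per-service config override), not something this PR defines.

```ini
# Sketch: enable the pthread heartbeat only for the WSGI service,
# e.g. in a service-specific override such as ironic/ironic-api.conf.
[DEFAULT]
heartbeat_in_pthread = true
```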

```ini
[deploy]
shred_random_overwrite_iterations = 0
shred_final_overwrite_with_zeros = false
continue_if_disk_secure_erase_fails = true
```
Member


Ah, @dougszumski this is part of what you were talking about?

@jovial jovial force-pushed the feature/2023.1/ironic-tuning branch from cff7553 to f41358c on April 17, 2024 17:52
@jovial
Contributor Author

jovial commented Apr 18, 2024

I've removed the settings that were flagged and made the relevant pieces conditional on Ironic being enabled. Please review again 🙏

@jovial jovial marked this pull request as ready for review April 18, 2024 08:31
@jovial jovial requested a review from a team as a code owner April 18, 2024 08:31
@priteau priteau self-requested a review September 26, 2024 09:49
@mnasiadka
Member

Any chance we can move forward on this?

@Alex-Welsh
Member

@JohnGarbutt or @dougszumski: any chance you could re-review this? It looks like exactly the sort of opinionated defaults we should be setting in SKC, but I don't have the subject knowledge to review it myself.

6 participants