Conversation

@teleprint-me (Contributor)

Title: Add verbose parameter for llamacpp

Description:
This pull request adds a 'verbose' parameter to the llamacpp module. When set to True, 'verbose' enables detailed log output during execution of the Llama model, which can help with debugging and with understanding the module's internal behavior.

The verbose parameter is a boolean that, when set to True, prints verbose output to stderr. It defaults to True but can be toggled off if less output is desired. The parameter has been added to the list of model parameters in the validate_environment method of the LlamaCpp class, which initializes the llama_cpp.Llama client:

class LlamaCpp(LLM):
    ...
    @root_validator()
    def validate_environment(cls, values: Dict) -> Dict:
        ...
        model_param_names = [
            ...
            "verbose",  # New verbose parameter added
        ]
        ...
        values["client"] = Llama(model_path, **model_params)
        ...
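
For illustration, a minimal usage sketch of the wrapper with the new parameter (the model path and prompt below are placeholders, not part of this change):

from langchain.llms import LlamaCpp

# Placeholder path; point this at a real local model file.
llm = LlamaCpp(
    model_path="./models/ggml-model.bin",
    verbose=False,  # suppress llama.cpp's detailed stderr logging
)
print(llm("Q: What is the capital of France? A:"))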

Issue:
Not applicable.

Dependencies:
No new dependencies introduced.

Maintainer:
Tagging @hinthornw, as this change relates to Tools / Toolkits.

This change does not introduce any new features or integrations, so no new tests or notebooks are provided. However, existing tests should cover this new parameter.

Maintainers, please review at your earliest convenience. Thank you for considering this contribution!

A reviewer (Contributor) commented on the following lines of the diff:

streaming: bool = True
"""Whether to stream the results, token by token."""

verbose: bool = True

Is this the default? Or is it False by default, making this a change from existing behavior?

@teleprint-me (Contributor, Author) replied Jul 6, 2023:

@hwchase17

Yes, the default is set to True in the API.

https://llama-cpp-python.readthedocs.io/en/latest/api-reference/

It's fine when you're not streaming. The output gets mangled when stream is True.
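
As a rough sketch (the model path and prompt are placeholders), turning verbose off when streaming keeps llama.cpp's stderr logs from interleaving with the streamed tokens:

from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./models/ggml-model.bin",  # placeholder path
    streaming=True,
    callbacks=[StreamingStdOutCallbackHandler()],
    verbose=False,  # passed through to llama_cpp.Llama to silence its logs
)
llm("Tell me a short joke.")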

@baskaryan added the lgtm label Jul 7, 2023
@baskaryan merged commit c9a0f24 into langchain-ai:master Jul 7, 2023
@baskaryan (Collaborator):

thanks @teleprint-me!

@teleprint-me deleted the llamacpp-verbose-fix branch July 7, 2023 20:44