diff --git a/.container/.env.example b/.container/.env.example
index fa9355efff..dc3a4b4425 100644
--- a/.container/.env.example
+++ b/.container/.env.example
@@ -15,4 +15,4 @@ GOOGLE_API_KEY=123456789
SEARCH_ENGINE_ID=123456789
# For OpenWeatherMap API
-OPENWEATHERMAP_API_KEY=123456789
\ No newline at end of file
+OPENWEATHERMAP_API_KEY=123456789
diff --git a/.container/README.md b/.container/README.md
index f1dc485ad5..7afd5cefd1 100644
--- a/.container/README.md
+++ b/.container/README.md
@@ -11,10 +11,10 @@ develop on it, with Docker.
## Configure Environment
Before starting the container, you need to navigate into the
-[.container](../.container) folder and create a `.env` file **with your own
-API
-keys**, so that these keys will be present in the environment variables of
-the container, which will later be used by CAMEL. The list of API keys that
+[.container](../.container) folder and create a `.env` file **with your own
+API keys**, so that these keys will be present in the environment
+variables of the container, which will later be used by CAMEL. The
+list of API keys
can be found in the `.env.example` file.
```bash
@@ -25,7 +25,7 @@ cp .env.example .env
```
## Start Container
-After configuring the API keys, simply run the following command to start
+After configuring the API keys, simply run the following command to start
up the working container. This will automatically set up the environment and
dependencies for CAMEL. It may take some time, please be patient.
@@ -33,7 +33,7 @@ dependencies for CAMEL. It may take some time, please be patient.
docker compose up -d
```
-After the build is completed, you can see the image `camel:localdev` in the
+After the build completes, you can see the image `camel:localdev` in the
list of images, along with a started container, `camel-localdev`.
```bash
@@ -54,7 +54,7 @@ docker compose exec camel bash
Then you will be in the container environment under the CAMEL directory, with
all the dependencies installed.
-Then You can try running the
+Now you can try running the
[role_playing.py](../examples/ai_society/role_playing.py)
example.
@@ -66,8 +66,8 @@ If you see the agents interacting with each other, this means you are all set.
Have fun with CAMEL in Docker!
## Save Your Progress
-We support volume mounting in the started container, which means that all
-of your changes in the CAMEL directory inside the container will be synced
+We support volume mounting in the started container, which means that all
+of your changes in the CAMEL directory inside the container will be synced
into the CAMEL repo on your host system. Therefore, you don't need to worry
about losing your progress when you exit the container.
@@ -75,8 +75,8 @@ about losing your progress when you exit the container.
You can simply press `Ctrl + D` or use the `exit` command to exit the
container.
-After exiting the container, under normal cases the container will still be
-running in the background. If you don't need the container anymore, you can
+After exiting the container, it will normally still be
+running in the background. If you don't need the container anymore, you can
stop and delete the container with the following command.
```bash
@@ -84,25 +84,25 @@ docker compose down
```
## Online Images
-For users who only want to have a quick tryout on CAMEL, we also provide the
+For users who only want to try CAMEL quickly, we also provide
pre-built images on
[our GitHub Container Registry](https://github.com/camel-ai/camel/pkgs/container/camel).
-Considering the size of the image, we only offer the image with the basic
+To keep the image size manageable, we only offer an image with the basic
dependencies.
-Note that there are some key differences between the local development
+Note that there are some key differences between the local development
image and the pre-built image that you should be aware of.
-1. The pre-built image is built upon the source code of each release of CAMEL.
- This means that they are not suitable for development, as they don't
- contain the git support. If you want to develop on CAMEL, please build
+1. The pre-built image is built upon the source code of each release of CAMEL.
+ This means that they are not suitable for development, as they don't
+ contain Git support. If you want to develop on CAMEL, please build
the image by yourself according to the instructions above.
-2. The pre-built image only contains the basic dependencies for running the
- examples. If you want to run the examples that require additional
- dependencies, you need to install them according to the
+2. The pre-built image only contains the basic dependencies for running the
+ examples. If you want to run the examples that require additional
+ dependencies, you need to install them according to the
installation guide in CAMEL's [README](../README.md).
-3. The pre-built image doesn't contain the API keys. You need to set up the
+3. The pre-built image doesn't contain the API keys. You need to set up the
API keys by yourself in the container environment.
-4. The pre-built image does not support volume mounting. This means that all
+4. The pre-built image does not support volume mounting. This means that all
of your changes in the container will be lost when you delete the container.
To quickly start a container with the pre-built image, you can use the
@@ -123,4 +123,4 @@ command.
```bash
python examples/ai_society/role_playing.py
-```
\ No newline at end of file
+```
diff --git a/.container/docker-compose.yaml b/.container/docker-compose.yaml
index dd02a19a6a..cee3f2a2ad 100644
--- a/.container/docker-compose.yaml
+++ b/.container/docker-compose.yaml
@@ -11,5 +11,3 @@ services:
- .env
user: "${UID:-1000}:${GID:-1000}"
command: ["tail", "-f", "/dev/null"]
-
-
diff --git a/.env.example b/.env.example
index b4c96a3d27..b9676d3808 100644
--- a/.env.example
+++ b/.env.example
@@ -140,4 +140,4 @@
# Grok API key
# XAI_API_KEY="Fill your Grok API Key here"
-# XAI_API_BASE_URL="Fill your Grok API Base URL here"
\ No newline at end of file
+# XAI_API_BASE_URL="Fill your Grok API Base URL here"
diff --git a/.github/actions/camel_install/action.yml b/.github/actions/camel_install/action.yml
index 79ceef1c82..f85a17c087 100644
--- a/.github/actions/camel_install/action.yml
+++ b/.github/actions/camel_install/action.yml
@@ -21,7 +21,7 @@ runs:
name: Restore caches for the virtual environment based on uv.lock
with:
path: ./.venv
- key: venv-${{ hashFiles('uv.lock') }}
+ key: venv-${{ hashFiles('uv.lock', 'pyproject.toml') }}
- name: Validate cached virtual environment
id: validate-venv
if: steps.cache-restore.outputs.cache-hit == 'true'
@@ -49,4 +49,4 @@ runs:
if: steps.cache-restore.outputs.cache-hit != 'true' || steps.validate-venv.outputs.cache-valid == 'false'
with:
path: ./.venv
- key: venv-${{ hashFiles('uv.lock') }}
+ key: venv-${{ hashFiles('uv.lock', 'pyproject.toml') }}
diff --git a/.github/workflows/codeql.yml b/.github/workflows/codeql.yml
index 751345f0bc..960f9b9174 100644
--- a/.github/workflows/codeql.yml
+++ b/.github/workflows/codeql.yml
@@ -9,70 +9,95 @@
# the `language` matrix defined below to confirm you have the correct set of
# supported CodeQL languages.
#
-name: "CodeQL"
+name: "CodeQL Advanced"
on:
push:
- branches: ["master"]
+ branches: [ "master" ]
pull_request:
- # The branches below must be a subset of the branches above
- branches: ["master"]
+ branches: [ "master" ]
schedule:
- - cron: "0 0 * * 1"
-
-permissions:
- contents: read
+ - cron: '33 15 * * 6'
jobs:
analyze:
- name: Analyze
- runs-on: ubuntu-latest
+ name: Analyze (${{ matrix.language }})
+ # Runner size impacts CodeQL analysis time. To learn more, please see:
+ # - https://gh.io/recommended-hardware-resources-for-running-codeql
+ # - https://gh.io/supported-runners-and-hardware-resources
+ # - https://gh.io/using-larger-runners (GitHub.com only)
+ # Consider using larger runners or machines with greater resources for possible analysis time improvements.
+ runs-on: ${{ (matrix.language == 'swift' && 'macos-latest') || 'ubuntu-latest' }}
permissions:
+ # required for all workflows
+ security-events: write
+
+ # required to fetch internal or private CodeQL packs
+ packages: read
+
+ # only required for workflows in private repositories
actions: read
contents: read
- security-events: write
strategy:
fail-fast: false
matrix:
- language: ["javascript", "python", "typescript"]
- # CodeQL supports [ $supported-codeql-languages ]
- # Learn more about CodeQL language support at https://aka.ms/codeql-docs/language-support
-
+ include:
+ - language: actions
+ build-mode: none
+ - language: javascript-typescript
+ build-mode: none
+ - language: python
+ build-mode: none
+ # CodeQL supports the following values for 'language': 'actions', 'c-cpp', 'csharp', 'go', 'java-kotlin', 'javascript-typescript', 'python', 'ruby', 'rust', 'swift'
+ # Use `c-cpp` to analyze code written in C, C++ or both
+ # Use 'java-kotlin' to analyze code written in Java, Kotlin or both
+ # Use 'javascript-typescript' to analyze code written in JavaScript, TypeScript or both
+ # To learn more about changing the languages that are analyzed or customizing the build mode for your analysis,
+ # see https://docs.github.com/en/code-security/code-scanning/creating-an-advanced-setup-for-code-scanning/customizing-your-advanced-setup-for-code-scanning.
+ # If you are analyzing a compiled language, you can modify the 'build-mode' for that language to customize how
+ # your codebase is analyzed, see https://docs.github.com/en/code-security/code-scanning/creating-an-advanced-setup-for-code-scanning/codeql-code-scanning-for-compiled-languages
steps:
- - name: Harden the runner (Audit all outbound calls)
- uses: step-security/harden-runner@df199fb7be9f65074067a9eb93f12bb4c5547cf2 # v2.13.3
- with:
- egress-policy: audit
-
- - name: Checkout repository
- uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6.0.1
-
- # Initializes the CodeQL tools for scanning.
- - name: Initialize CodeQL
- uses: github/codeql-action/init@cf1bb45a277cb3c205638b2cd5c984db1c46a412 # v4.31.7
- with:
- languages: ${{ matrix.language }}
- # If you wish to specify custom queries, you can do so here or in a config file.
- # By default, queries listed here will override any specified in a config file.
- # Prefix the list here with "+" to use these queries and those in the config file.
+ - name: Checkout repository
+ uses: actions/checkout@v4
- # Autobuild attempts to build any compiled languages (C/C++, C#, or Java).
- # If this step fails, then you should remove it and run the build manually (see below)
- - name: Autobuild
- uses: github/codeql-action/autobuild@cf1bb45a277cb3c205638b2cd5c984db1c46a412 # v4.31.7
+ # Add any setup steps before running the `github/codeql-action/init` action.
+ # This includes steps like installing compilers or runtimes (`actions/setup-node`
+ # or others). This is typically only required for manual builds.
+ # - name: Setup runtime (example)
+ # uses: actions/setup-example@v1
- # ℹ️ Command-line programs to run using the OS shell.
- # 📚 See https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsrun
+ # Initializes the CodeQL tools for scanning.
+ - name: Initialize CodeQL
+ uses: github/codeql-action/init@v4
+ with:
+ languages: ${{ matrix.language }}
+ build-mode: ${{ matrix.build-mode }}
+ # If you wish to specify custom queries, you can do so here or in a config file.
+ # By default, queries listed here will override any specified in a config file.
+ # Prefix the list here with "+" to use these queries and those in the config file.
- # If the Autobuild fails above, remove it and uncomment the following three lines.
- # modify them (or add more) to build your code if your project, please refer to the EXAMPLE below for guidance.
+ # For more details on CodeQL's query packs, refer to: https://docs.github.com/en/code-security/code-scanning/automatically-scanning-your-code-for-vulnerabilities-and-errors/configuring-code-scanning#using-queries-in-ql-packs
+ # queries: security-extended,security-and-quality
- # - run: |
- # echo "Run, Build Application using script"
- # ./location_of_script_within_repo/buildscript.sh
+ # If the analyze step fails for one of the languages you are analyzing with
+ # "We were unable to automatically build your code", modify the matrix above
+ # to set the build mode to "manual" for that language. Then modify this step
+ # to build your code.
+ # ℹ️ Command-line programs to run using the OS shell.
+ # 📚 See https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsrun
+ - name: Run manual build steps
+ if: matrix.build-mode == 'manual'
+ shell: bash
+ run: |
+ echo 'If you are using a "manual" build mode for one or more of the' \
+ 'languages you are analyzing, replace this with the commands to build' \
+ 'your code, for example:'
+ echo ' make bootstrap'
+ echo ' make release'
+ exit 1
- - name: Perform CodeQL Analysis
- uses: github/codeql-action/analyze@cf1bb45a277cb3c205638b2cd5c984db1c46a412 # v4.31.7
- with:
- category: "/language:${{matrix.language}}"
+ - name: Perform CodeQL Analysis
+ uses: github/codeql-action/analyze@v4
+ with:
+ category: "/language:${{matrix.language}}"
diff --git a/.github/workflows/test_minimal_dependency.yml b/.github/workflows/test_minimal_dependency.yml
index 6942083a7d..b68c1b6489 100644
--- a/.github/workflows/test_minimal_dependency.yml
+++ b/.github/workflows/test_minimal_dependency.yml
@@ -43,4 +43,4 @@ jobs:
run: |
source .venv/bin/activate
pip install pytest dotenv
- pytest test/integration_test/test_minimal_dependency.py
\ No newline at end of file
+ pytest test/integration_test/test_minimal_dependency.py
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 72311029c4..70b6ea47e3 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -23,7 +23,7 @@ repos:
hooks:
- id: check-license
name: Check License
- entry: python licenses/update_license.py . licenses/license_template.txt
+ entry: python licenses/update_license.py . licenses/license_template.txt
language: system
types: [python]
exclude: ^(docs/cookbooks/|examples/usecases/) # Ignore files under docs/cookbooks and examples/usecases
@@ -40,16 +40,8 @@ repos:
rev: v8.16.3
hooks:
- id: gitleaks
- - repo: https://github.com/pre-commit/mirrors-eslint
- rev: v8.38.0
- hooks:
- - id: eslint
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.4.0
hooks:
- id: end-of-file-fixer
- id: trailing-whitespace
- - repo: https://github.com/pylint-dev/pylint
- rev: v2.17.2
- hooks:
- - id: pylint
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index ec38d359c1..7cc027969f 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -4,7 +4,7 @@ Thank you for your interest in contributing to the CAMEL project! 🎉 We're exc
## Join Our Community 🌍
-### Schedule an Introduction Call 📞
+### Schedule an Introduction Call 📞
- English speakers: [here](https://cal.com/wendong-fan-5yu7x5/30min)
- Chinese speakers: [here](https://cal.com/wendong-fan-5yu7x5/30min)
@@ -21,7 +21,7 @@ Thank you for your interest in contributing to the CAMEL project! 🎉 We're exc
### Contributing to the Code 👨💻👩💻
-If you're eager to contribute to this project, that's fantastic! We're thrilled to have your support.
+If you're eager to contribute to this project, that's fantastic! We're thrilled to have your support.
- If you are a contributor from the community:
- Follow the [Fork-and-Pull-Request](https://docs.github.com/en/get-started/quickstart/contributing-to-projects) workflow when opening your pull requests.
@@ -40,9 +40,9 @@ Ensuring excellent documentation and thorough testing is absolutely crucial. Her
- Update any affected example console scripts in the `examples` directory, Gradio demos in the `apps` directory, and documentation in the `docs` directory.
- Update unit tests when relevant.
- If you add a feature:
- - Include unit tests in the `test` directory.
+ - Include unit tests in the `test` directory.
- Add a demo script in the `examples` directory.
-
+
We're a small team focused on building great things. If you have something in mind that you'd like to add or modify, opening a pull request is the ideal way to catch our attention. 🚀
### Contributing to the Cookbook Writing 📚
@@ -62,9 +62,9 @@ Here’s how you can contribute to writing cookbooks:
- Interactive Elements: Whenever applicable, add interactive code cells in Colab that users can directly run and modify.
##### 1.2. Developing cookbooks for in-progress features
-You can install the latest version of CAMEL from the main branch or a topic branch. This allows you to use the latest codebase, or in-progress features in your cookbook.
+You can install the latest version of CAMEL from the main branch or a topic branch. This allows you to use the latest codebase or in-progress features in your cookbook.
-`!pip install "git+https://github.com/camel-ai/camel.git@master#egg=camel-ai[all]"`
+`!pip install "git+https://github.com/camel-ai/camel.git@master#egg=camel-ai[all]"`
Changing the branch and extras section (e.g. remove `#egg=camel-ai[all]`) will behave as expected.
@@ -174,10 +174,10 @@ r"""Class for managing conversations of CAMEL Chat Agents.
Example:
```markdown
Args:
- system_message (BaseMessage): The system message for initializing
+ system_message (BaseMessage): The system message for initializing
the agent's conversation context.
- model (BaseModelBackend, optional): The model backend to use for
- response generation. Defaults to :obj:`OpenAIModel` with
+ model (BaseModelBackend, optional): The model backend to use for
+ response generation. Defaults to :obj:`OpenAIModel` with
`GPT_4O_MINI`. (default: :obj:`OpenAIModel` with `GPT_4O_MINI`)
```
@@ -220,12 +220,12 @@ Avoid using `print` for output. Use Python's `logging` module (`logger`) to ensu
Examples:
-- Bad:
+- Bad:
```python
print("Process started")
print(f"User input: {user_input}")
```
-- Good:
+- Good:
```python
Args:
logger.info("Process started")
diff --git a/LICENSE b/LICENSE
index d9056118eb..c46213dce5 100644
--- a/LICENSE
+++ b/LICENSE
@@ -198,4 +198,4 @@
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
- limitations under the License.
\ No newline at end of file
+ limitations under the License.
diff --git a/README.ja.md b/README.ja.md
index 9d340e2cfb..6f6da33dc3 100644
--- a/README.ja.md
+++ b/README.ja.md
@@ -632,4 +632,4 @@ CAMELのマルチエージェントフレームワークがインフラ自動化
[package-download-url]: https://pypi.org/project/camel-ai
[join-us]:https://eigent-ai.notion.site/eigent-ai-careers
[join-us-image]:https://img.shields.io/badge/Join%20Us-yellow?style=plastic
-[image-join-us]: https://camel-ai.github.io/camel_asset/graphics/join_us.png
\ No newline at end of file
+[image-join-us]: https://camel-ai.github.io/camel_asset/graphics/join_us.png
diff --git a/README.md b/README.md
index 8ecc4389f9..a7d05d3772 100644
--- a/README.md
+++ b/README.md
@@ -49,7 +49,7 @@
-Join us ([*Discord*](https://discord.camel-ai.org/) or [*WeChat*](https://ghli.org/camel/wechat.png)) in pushing the boundaries of finding the scaling laws of agents.
+Join us ([*Discord*](https://discord.camel-ai.org/) or [*WeChat*](https://ghli.org/camel/wechat.png)) in pushing the boundaries of finding the scaling laws of agents.
🌟 Star CAMEL on GitHub and be instantly notified of new releases.
@@ -382,7 +382,7 @@ We believe that studying these agents on a large scale offers valuable insights
>### Research with US
>
->We warmly invite you to use CAMEL for your impactful research.
+>We warmly invite you to use CAMEL for your impactful research.
>
> Rigorous research takes time and resources. We are a community-driven research collective with 100+ researchers exploring the frontier research of Multi-agent Systems. Join our ongoing projects or test new ideas with us, [reach out via email](mailto:camel-ai@eigent.ai) for more information.
>
@@ -538,7 +538,7 @@ We are actively involved in community events including:
- 🎙️ **Community Meetings** — Weekly virtual syncs with the CAMEL team
- 🏆 **Competitions** — Hackathons, Bounty Tasks and coding challenges hosted by CAMEL
- 🤝 **Volunteer Activities** — Contributions, documentation drives, and mentorship
-- 🌍 **Ambassador Programs** — Represent CAMEL in your university or local tech groups
+- 🌍 **Ambassador Programs** — Represent CAMEL in your university or local tech groups
> Want to host or participate in a CAMEL event? Join our [Discord](https://discord.com/invite/CNcNpquyDc) or want to be part of [Ambassador Program](https://www.camel-ai.org/ambassador).
diff --git a/README.zh.md b/README.zh.md
index f3517c69d2..7ba110cabb 100644
--- a/README.zh.md
+++ b/README.zh.md
@@ -554,4 +554,4 @@ pip install camel-ai
[package-download-url]: https://pypi.org/project/camel-ai
[join-us]:https://eigent-ai.notion.site/eigent-ai-careers
[join-us-image]:https://img.shields.io/badge/Join%20Us-yellow?style=plastic
-[image-join-us]: https://camel-ai.github.io/camel_asset/graphics/join_us.png
\ No newline at end of file
+[image-join-us]: https://camel-ai.github.io/camel_asset/graphics/join_us.png
diff --git a/apps/data_explorer/README.md b/apps/data_explorer/README.md
index 8b14c05efa..33872f2377 100644
--- a/apps/data_explorer/README.md
+++ b/apps/data_explorer/README.md
@@ -9,4 +9,4 @@
Validated for python 3.8 and 3.10.
-Run `python data_explorer.py --help` for command line options.
\ No newline at end of file
+Run `python data_explorer.py --help` for command line options.
diff --git a/camel/agents/chat_agent.py b/camel/agents/chat_agent.py
index f101679cff..c4f8a94d49 100644
--- a/camel/agents/chat_agent.py
+++ b/camel/agents/chat_agent.py
@@ -165,7 +165,7 @@ def _cleanup_temp_files():
textwrap.dedent(
"""\
Please format the following content:
-
+
{content}
"""
)
diff --git a/camel/agents/deductive_reasoner_agent.py b/camel/agents/deductive_reasoner_agent.py
index c56e3f279f..b5a12fe9ba 100644
--- a/camel/agents/deductive_reasoner_agent.py
+++ b/camel/agents/deductive_reasoner_agent.py
@@ -103,105 +103,105 @@ def deduce_conditions_and_quality(
"""
self.reset()
- deduce_prompt = """You are a deductive reasoner. You are tasked to
- complete the TASK based on the THOUGHT OF DEDUCTIVE REASONING, the
- STARTING STATE A and the TARGET STATE B. You are given the CONTEXT
+ deduce_prompt = """You are a deductive reasoner. You are tasked to
+ complete the TASK based on the THOUGHT OF DEDUCTIVE REASONING, the
+ STARTING STATE A and the TARGET STATE B. You are given the CONTEXT
CONTENT to help you complete the TASK.
-Your answer MUST strictly adhere to the structure of ANSWER TEMPLATE, ONLY
+Your answer MUST strictly adhere to the structure of ANSWER TEMPLATE, ONLY
fill in the BLANKs, and DO NOT alter or modify any other part of the template
===== MODELING OF DEDUCTIVE REASONING =====
-You are tasked with understanding a mathematical model based on the components
+You are tasked with understanding a mathematical model based on the components
${A, B, C, Q, L}$. In this model: ``L: A ⊕ C -> q * B``.
- $A$ represents the known starting state.
- $B$ represents the known target state.
- $C$ represents the conditions required to transition from $A$ to $B$.
-- $Q$ represents the quality or effectiveness of the transition from $A$ to
+- $Q$ represents the quality or effectiveness of the transition from $A$ to
$B$.
- $L$ represents the path or process from $A$ to $B$.
===== THOUGHT OF DEDUCTIVE REASONING =====
1. Define the Parameters of A and B:
- - Characterization: Before delving into transitions, thoroughly understand
- the nature and boundaries of both $A$ and $B$. This includes the type,
+ - Characterization: Before delving into transitions, thoroughly understand
+ the nature and boundaries of both $A$ and $B$. This includes the type,
properties, constraints, and possible interactions between the two.
- - Contrast and Compare: Highlight the similarities and differences between
- $A$ and $B$. This comparative analysis will give an insight into what
+ - Contrast and Compare: Highlight the similarities and differences between
+ $A$ and $B$. This comparative analysis will give an insight into what
needs changing and what remains constant.
2. Historical & Empirical Analysis:
- - Previous Transitions according to the Knowledge Base of GPT: (if
- applicable) Extract conditions and patterns from the historical instances
- where a similar transition from a state comparable to $A$ moved towards
+ - Previous Transitions according to the Knowledge Base of GPT: (if
+ applicable) Extract conditions and patterns from the historical instances
+ where a similar transition from a state comparable to $A$ moved towards
$B$.
- - Scientific Principles: (if applicable) Consider the underlying
- scientific principles governing or related to the states and their
- transition. For example, if $A$ and $B$ are physical states, laws of
+ - Scientific Principles: (if applicable) Consider the underlying
+ scientific principles governing or related to the states and their
+ transition. For example, if $A$ and $B$ are physical states, laws of
physics might apply.
3. Logical Deduction of Conditions ($C$):
- - Direct Path Analysis: What are the immediate and direct conditions
+ - Direct Path Analysis: What are the immediate and direct conditions
required to move from $A$ to $B$?
- - Intermediate States: Are there states between $A$ and $B$ that must be
- traversed or can be used to make the transition smoother or more
+ - Intermediate States: Are there states between $A$ and $B$ that must be
+ traversed or can be used to make the transition smoother or more
efficient? If yes, what is the content?
- - Constraints & Limitations: Identify potential barriers or restrictions
- in moving from $A$ to $B$. These can be external (e.g., environmental
+ - Constraints & Limitations: Identify potential barriers or restrictions
+ in moving from $A$ to $B$. These can be external (e.g., environmental
factors) or internal (properties of $A$ or $B$).
- - Resource and Information Analysis: What resources and information are
- required for the transition? This could be time, entity, factor, code
+ - Resource and Information Analysis: What resources and information are
+ required for the transition? This could be time, entity, factor, code
language, software platform, unknowns, etc.
- - External Influences: Consider socio-economic, political, or
- environmental factors (if applicable) that could influence the transition
+ - External Influences: Consider socio-economic, political, or
+ environmental factors (if applicable) that could influence the transition
conditions.
- - Creative/Heuristic Reasoning: Open your mind to multiple possible $C$'s,
- no matter how unconventional they might seem. Utilize analogies,
- metaphors, or brainstorming techniques to envision possible conditions or
+ - Creative/Heuristic Reasoning: Open your mind to multiple possible $C$'s,
+ no matter how unconventional they might seem. Utilize analogies,
+ metaphors, or brainstorming techniques to envision possible conditions or
paths from $A$ to $B$.
- - The conditions $C$ should be multiple but in one sentence. And each
+ - The conditions $C$ should be multiple but in one sentence. And each
condition should be concerned with one aspect/entity.
4. Entity/Label Recognition of Conditions ($C$):
- - Identify and categorize entities of Conditions ($C$) such as the names,
- locations, dates, specific technical terms or contextual parameters that
+ - Identify and categorize entities of Conditions ($C$) such as the names,
+ locations, dates, specific technical terms or contextual parameters that
might be associated with events, innovations post-2022.
- - The output of the entities/labels will be used as tags or labels for
- semantic similarity searches. The entities/labels may be the words, or
- phrases, each of them should contain valuable, high information entropy
+ - The output of the entities/labels will be used as tags or labels for
+ semantic similarity searches. The entities/labels may be the words, or
+ phrases, each of them should contain valuable, high information entropy
information, and should be independent.
- - Ensure that the identified entities are formatted in a manner suitable
- for database indexing and retrieval. Organize the entities into
- categories, and combine the category with its instance into a continuous
+ - Ensure that the identified entities are formatted in a manner suitable
+ for database indexing and retrieval. Organize the entities into
+ categories, and combine the category with its instance into a continuous
phrase, without using colons or other separators.
- - Format these entities for database indexing: output the category rather
- than its instance/content into a continuous phrase. For example, instead
+ - Format these entities for database indexing: output the category rather
+ than its instance/content into a continuous phrase. For example, instead
of "Jan. 02", identify it as "Event time".
5. Quality Assessment ($Q$):
- - Efficiency: How efficient is the transition from $A$ to $B$, which
+ - Efficiency: How efficient is the transition from $A$ to $B$, which
measures the resources used versus the desired outcome?
- - Effectiveness: Did the transition achieve the desired outcome or was the
+ - Effectiveness: Did the transition achieve the desired outcome or was the
target state achieved as intended?
- - Safety & Risks: Assess any risks associated with the transition and the
+ - Safety & Risks: Assess any risks associated with the transition and the
measures to mitigate them.
- - Feedback Mechanisms: Incorporate feedback loops to continuously monitor
+ - Feedback Mechanisms: Incorporate feedback loops to continuously monitor
and adjust the quality of transition, making it more adaptive.
6. Iterative Evaluation:
- - Test & Refine: Based on the initially deduced conditions and assessed
- quality, iterate the process to refine and optimize the transition. This
- might involve tweaking conditions, employing different paths, or changing
+ - Test & Refine: Based on the initially deduced conditions and assessed
+ quality, iterate the process to refine and optimize the transition. This
+ might involve tweaking conditions, employing different paths, or changing
resources.
- - Feedback Integration: Use feedback to make improvements and increase the
+ - Feedback Integration: Use feedback to make improvements and increase the
quality of the transition.
-7. Real-world scenarios often present challenges that may not be captured by
+7. Real-world scenarios often present challenges that may not be captured by
models and frameworks. While using the model, maintain an adaptive mindset:
- - Scenario Exploration: Continuously imagine various possible scenarios,
+ - Scenario Exploration: Continuously imagine various possible scenarios,
both positive and negative, to prepare for unexpected events.
- Flexibility: Be prepared to modify conditions ($C$) or alter the path/
process ($L$) if unforeseen challenges arise.
- - Feedback Integration: Rapidly integrate feedback from actual
- implementations to adjust the model's application, ensuring relevancy and
+ - Feedback Integration: Rapidly integrate feedback from actual
+ implementations to adjust the model's application, ensuring relevancy and
effectiveness.
===== TASK =====
-Given the starting state $A$ and the target state $B$, assuming that a path
-$L$ always exists between $A$ and $B$, how can one deduce or identify the
+Given the starting state $A$ and the target state $B$, assuming that a path
+$L$ always exists between $A$ and $B$, how can one deduce or identify the
necessary conditions $C$ and the quality $Q$ of the transition?
===== STARTING STATE $A$ =====
@@ -217,7 +217,7 @@ def deduce_conditions_and_quality(
- Logical Deduction of Conditions ($C$) (multiple conditions can be deduced):
condition :
.
-- Entity/Label Recognition of Conditions:\n[, , ...] (include
+- Entity/Label Recognition of Conditions:\n[, , ...] (include
square brackets)
- Quality Assessment ($Q$) (do not use symbols):
.
diff --git a/camel/agents/knowledge_graph_agent.py b/camel/agents/knowledge_graph_agent.py
index 979deba048..045b14f93e 100644
--- a/camel/agents/knowledge_graph_agent.py
+++ b/camel/agents/knowledge_graph_agent.py
@@ -40,53 +40,53 @@
text_prompt = """
-You are tasked with extracting nodes and relationships from given content and
-structures them into Node and Relationship objects. Here's the outline of what
+You are tasked with extracting nodes and relationships from given content and
+structuring them into Node and Relationship objects. Here's an outline of what
you need to do:
Content Extraction:
-You should be able to process input content and identify entities mentioned
+You should be able to process input content and identify entities mentioned
within it.
-Entities can be any noun phrases or concepts that represent distinct entities
+Entities can be any noun phrases or concepts that represent distinct entities
in the context of the given content.
Node Extraction:
For each identified entity, you should create a Node object.
Each Node object should have a unique identifier (id) and a type (type).
-Additional properties associated with the node can also be extracted and
+Additional properties associated with the node can also be extracted and
stored.
Relationship Extraction:
You should identify relationships between entities mentioned in the content.
For each relationship, create a Relationship object.
-A Relationship object should have a subject (subj) and an object (obj) which
+A Relationship object should have a subject (subj) and an object (obj) which
are Node objects representing the entities involved in the relationship.
-Each relationship should also have a type (type), and additional properties if
+Each relationship should also have a type (type), and additional properties if
applicable.
Output Formatting:
-The extracted nodes and relationships should be formatted as instances of the
+The extracted nodes and relationships should be formatted as instances of the
provided Node and Relationship classes.
Ensure that the extracted data adheres to the structure defined by the classes.
-Output the structured data in a format that can be easily validated against
+Output the structured data in a format that can be easily validated against
the provided code.
-Do not wrap the output in lists or dictionaries, provide the Node and
+Do not wrap the output in lists or dictionaries; provide the Node and
Relationship with unique identifiers.
-Strictly follow the format provided in the example output, do not add any
+Strictly follow the format provided in the example output; do not add any
additional information.
Instructions for you:
Read the provided content thoroughly.
-Identify distinct entities mentioned in the content and categorize them as
+Identify distinct entities mentioned in the content and categorize them as
nodes.
-Determine relationships between these entities and represent them as directed
+Determine relationships between these entities and represent them as directed
relationships.
Provide the extracted nodes and relationships in the specified format below.
Example for you:
Example Content:
-"John works at XYZ Corporation. He is a software engineer. The company is
+"John works at XYZ Corporation. He is a software engineer. The company is
located in New York City."
Expected Output:
@@ -99,14 +99,14 @@
Relationships:
-Relationship(subj=Node(id='John', type='Person'), obj=Node(id='XYZ
+Relationship(subj=Node(id='John', type='Person'), obj=Node(id='XYZ
Corporation', type='Organization'), type='WorksAt')
-Relationship(subj=Node(id='John', type='Person'), obj=Node(id='New York City',
+Relationship(subj=Node(id='John', type='Person'), obj=Node(id='New York City',
type='Location'), type='ResidesIn')
===== TASK =====
-Please extracts nodes and relationships from given content and structures them
-into Node and Relationship objects.
+Please extract nodes and relationships from the given content and structure them
+into Node and Relationship objects.
{task}
"""
diff --git a/camel/agents/mcp_agent.py b/camel/agents/mcp_agent.py
index 309087534f..701ba83a25 100644
--- a/camel/agents/mcp_agent.py
+++ b/camel/agents/mcp_agent.py
@@ -62,9 +62,9 @@
SYS_MSG_CONTENT = """
-You are a helpful assistant, and you prefer to use tools provided by the user
+You are a helpful assistant, and you prefer to use tools provided by the user
to solve problems.
-Using a tool, you will tell the user `server_idx`, `tool_name` and
+When using a tool, you will tell the user `server_idx`, `tool_name` and
`tool_args` formatted in JSON as follows:
```json
{
diff --git a/camel/agents/multi_hop_generator_agent.py b/camel/agents/multi_hop_generator_agent.py
index bcdcdca287..4135365f0d 100644
--- a/camel/agents/multi_hop_generator_agent.py
+++ b/camel/agents/multi_hop_generator_agent.py
@@ -56,7 +56,7 @@ def __init__(self, **kwargs: Any) -> None:
system_text: str = textwrap.dedent(
"""\
- You are an expert at generating
+ You are an expert at generating
multi-hop question-answer pairs.
For each context, you should:
1. Identify multiple related facts or pieces of information
diff --git a/camel/agents/tool_agents/hugging_face_tool_agent.py b/camel/agents/tool_agents/hugging_face_tool_agent.py
index a8600ba2a6..9d72cab394 100644
--- a/camel/agents/tool_agents/hugging_face_tool_agent.py
+++ b/camel/agents/tool_agents/hugging_face_tool_agent.py
@@ -90,7 +90,7 @@ def __init__(
sea_add_island_image = {name}.step("Draw me a picture of the sea then transform the picture to add an island")
sea_add_island_image.save("./sea_add_island_image.png")
-# If you'd like to keep a state across executions or to pass non-text objects to the agent,
+# If you'd like to keep a state across executions or to pass non-text objects to the agent,
# you can do so by specifying variables that you would like the agent to use. For example,
# you could generate the first image of rivers and lakes, and ask the model to update that picture to add an island by doing the following:
picture = {name}.step("Generate a picture of rivers and lakes.")
diff --git a/camel/benchmarks/apibank.py b/camel/benchmarks/apibank.py
index 850a33ca98..2bb8a77f11 100644
--- a/camel/benchmarks/apibank.py
+++ b/camel/benchmarks/apibank.py
@@ -542,9 +542,9 @@ def evaluate(self, sample_id, model_output):
replace the key and value with the actual parameters. \
Your output should start with a square bracket "[" \
and end with a square bracket "]". Do not output any \
-other explanation or prompt or the result of the API call in your output.
+other explanation or prompt or the result of the API call in your output.
This year is 2023.
-Input:
+Input:
User: [User's utterance]
AI: [AI's utterance]
@@ -559,7 +559,7 @@ def evaluate(self, sample_id, model_output):
conversation history 1..t, please generate the next \
dialog that the AI should produce after the API call t.
This year is 2023.
-Input:
+Input:
User: [User's utterance]
AI: [AI's utterance]
[ApiName(key1='value1', key2='value2', …)]
diff --git a/camel/benchmarks/browsecomp.py b/camel/benchmarks/browsecomp.py
index 3f269388ec..f35ca701b1 100644
--- a/camel/benchmarks/browsecomp.py
+++ b/camel/benchmarks/browsecomp.py
@@ -72,27 +72,27 @@ class GradingResponse(BaseModel):
extracted_final_answer: str = Field(
description="""
The final exact answer extracted from the [response].
-Put the extracted answer as 'None' if there is no exact, final answer to
+Put the extracted answer as 'None' if there is no exact, final answer to
extract from the response."""
)
reasoning: str = Field(
description="""
-Explain why the extracted_final_answer is correct or incorrect
-based on [correct_answer], focusing only on if there are meaningful
-differences between [correct_answer] and the extracted_final_answer.
-Do not comment on any background to the problem, do not attempt
-to solve the problem, do not argue for any answer different
+Explain why the extracted_final_answer is correct or incorrect
+based on [correct_answer], focusing only on whether there are meaningful
+differences between [correct_answer] and the extracted_final_answer.
+Do not comment on any background to the problem, do not attempt
+to solve the problem, do not argue for any answer different
than [correct_answer], focus only on whether the answers match."""
)
correct: str = Field(
- description="""Answer 'yes' if extracted_final_answer matches the
-[correct_answer] given above, or is within a small margin of error for
-numerical problems. Answer 'no' otherwise, i.e. if there if there is any
-inconsistency, ambiguity, non-equivalency, or if the extracted answer is
+ description="""Answer 'yes' if extracted_final_answer matches the
+[correct_answer] given above, or is within a small margin of error for
+numerical problems. Answer 'no' otherwise, i.e. if there is any
+inconsistency, ambiguity, non-equivalency, or if the extracted answer is
incorrect."""
)
confidence: str = Field(
- description="""The extracted confidence score between 0|\%|
+ description="""The extracted confidence score between 0|\%|
and 100|\%| from [response]. Put 100 if there is no confidence score available.
"""
)
@@ -161,7 +161,7 @@ class EvalResult(BaseModel):
"""
GRADER_TEMPLATE = """
-Judge whether the following [response] to [question] is correct or not
+Judge whether the following [response] to [question] is correct or not
based on the precise and unambiguous [correct_answer] below.
[question]: {question}
@@ -171,26 +171,26 @@ class EvalResult(BaseModel):
Your judgement must be in the format and criteria specified below:
extracted_final_answer: The final exact answer extracted from the [response].
-Put the extracted answer as 'None' if there is no exact, final answer to
+Put the extracted answer as 'None' if there is no exact, final answer to
extract from the response.
[correct_answer]: {correct_answer}
-reasoning: Explain why the extracted_final_answer is correct or incorrect
-based on [correct_answer], focusing only on if there are meaningful
-differences between [correct_answer] and the extracted_final_answer.
-Do not comment on any background to the problem, do not attempt
-to solve the problem, do not argue for any answer different
+reasoning: Explain why the extracted_final_answer is correct or incorrect
+based on [correct_answer], focusing only on if there are meaningful
+differences between [correct_answer] and the extracted_final_answer.
+Do not comment on any background to the problem, do not attempt
+to solve the problem, do not argue for any answer different
than [correct_answer], focus only on whether the answers match.
-correct: Answer 'yes' if extracted_final_answer matches the
-[correct_answer] given above, or is within a small margin of error for
-numerical problems. Answer 'no' otherwise, i.e. if there is any
-inconsistency, ambiguity, non-equivalency, or if the extracted answer is
+correct: Answer 'yes' if extracted_final_answer matches the
+[correct_answer] given above, or is within a small margin of error for
+numerical problems. Answer 'no' otherwise, i.e. if there is any
+inconsistency, ambiguity, non-equivalency, or if the extracted answer is
incorrect.
-confidence: The extracted confidence score between 0|\%| and 100|\%|
+confidence: The extracted confidence score between 0|\%| and 100|\%|
from [response]. Put 100 if there is no confidence score available.
""".strip()
diff --git a/camel/benchmarks/mock_website/README.md b/camel/benchmarks/mock_website/README.md
index fe4433644a..f31e4c7c75 100644
--- a/camel/benchmarks/mock_website/README.md
+++ b/camel/benchmarks/mock_website/README.md
@@ -86,11 +86,9 @@ The dispatcher will automatically download project-specific `templates` and `sta
## TODO: Automated Question Generation Module
-A planned future module for this project is the development of an automated question generation system. This system would analyze the current state of the web application environment (e.g., visible elements, available products, cart status) and generate relevant questions or tasks for a web agent to solve.
+A planned future module for this project is the development of an automated question generation system. This system would analyze the current state of the web application environment (e.g., visible elements, available products, cart status) and generate relevant questions or tasks for a web agent to solve.
This could involve:
* Identifying interactable elements and their states.
* Understanding the current context (e.g., on product page, in cart).
* Formulating natural language questions or goal descriptions based on this context (e.g., "Find a product under $50 in the Electronics category and add it to the cart," or "What is the current subtotal of the cart after adding two units of item X?").
-
-
diff --git a/camel/benchmarks/mock_website/requirements.txt b/camel/benchmarks/mock_website/requirements.txt
index 812033fc32..1a580d77d9 100644
--- a/camel/benchmarks/mock_website/requirements.txt
+++ b/camel/benchmarks/mock_website/requirements.txt
@@ -1,3 +1,3 @@
Flask>=2.0
huggingface-hub
-requests
\ No newline at end of file
+requests
diff --git a/camel/benchmarks/mock_website/task.json b/camel/benchmarks/mock_website/task.json
index ef652493b1..1979081da5 100644
--- a/camel/benchmarks/mock_website/task.json
+++ b/camel/benchmarks/mock_website/task.json
@@ -101,4 +101,4 @@
"quantity": 1
}
]
-}
\ No newline at end of file
+}
diff --git a/camel/benchmarks/nexus.py b/camel/benchmarks/nexus.py
index 7355fc7ca2..6e15cf207e 100644
--- a/camel/benchmarks/nexus.py
+++ b/camel/benchmarks/nexus.py
@@ -68,7 +68,7 @@ class NexusTool:
Respond with nothing but the function call ONLY, such that I can \
directly execute your function call without any post processing \
-necessary from my end. Do not use variables.
+necessary from my end. Do not use variables.
If there are more than two function calls, separate them with a semicolon (;).
{tools}
diff --git a/camel/bots/discord/discord_store.py b/camel/bots/discord/discord_store.py
index e68fd27fa6..1b850c2f41 100644
--- a/camel/bots/discord/discord_store.py
+++ b/camel/bots/discord/discord_store.py
@@ -94,7 +94,7 @@ async def save(self, installation: DiscordInstallation):
await db.execute(
"""
INSERT INTO discord_installations (
- guild_id, access_token, refresh_token,
+ guild_id, access_token, refresh_token,
installed_at, token_expires_at
) VALUES (?, ?, ?, ?, ?)
ON CONFLICT(guild_id) DO UPDATE SET
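The hunk above uses SQLite's `INSERT ... ON CONFLICT ... DO UPDATE` upsert, keyed on `guild_id`. A self-contained sketch of the same pattern follows; the hunk cuts off right after `DO UPDATE SET`, so the specific updated columns here are an assumption:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE discord_installations (
        guild_id TEXT PRIMARY KEY,
        access_token TEXT,
        refresh_token TEXT,
        installed_at TEXT,
        token_expires_at TEXT
    )"""
)

# Assumed SET clause: refresh the tokens on re-installation for the
# same guild, instead of failing with a UNIQUE constraint error.
UPSERT = """
INSERT INTO discord_installations (
    guild_id, access_token, refresh_token,
    installed_at, token_expires_at
) VALUES (?, ?, ?, ?, ?)
ON CONFLICT(guild_id) DO UPDATE SET
    access_token = excluded.access_token,
    refresh_token = excluded.refresh_token,
    token_expires_at = excluded.token_expires_at
"""

conn.execute(UPSERT, ("g1", "tok-old", "r1", "2024-01-01", "2024-02-01"))
conn.execute(UPSERT, ("g1", "tok-new", "r2", "2024-01-01", "2024-03-01"))
rows = conn.execute(
    "SELECT guild_id, access_token FROM discord_installations"
).fetchall()
# Still a single row, now carrying the newer token.
```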
diff --git a/camel/bots/slack/models.py b/camel/bots/slack/models.py
index 598a2127e9..96c1ddf09a 100644
--- a/camel/bots/slack/models.py
+++ b/camel/bots/slack/models.py
@@ -126,11 +126,11 @@ class SlackEventBody(BaseModel):
"""The timestamp (in seconds) representing when the event was triggered."""
authorizations: Optional[list[SlackAuthProfile]] = None
- """An optional list of authorizations that describe which installation can
+ """An optional list of authorizations that describe which installation can
see the event."""
is_ext_shared_channel: bool
- """Indicates if the event is part of a shared channel between different
+ """Indicates if the event is part of a shared channel between different
organizations."""
event_context: str
diff --git a/camel/data_collectors/alpaca_collector.py b/camel/data_collectors/alpaca_collector.py
index 78b742a2be..572263bda3 100644
--- a/camel/data_collectors/alpaca_collector.py
+++ b/camel/data_collectors/alpaca_collector.py
@@ -26,7 +26,7 @@
Extract key entities and attributes from the conversations
and convert them into a structured JSON format.
For example:
- Instruction: You are a helpful assistant.
+ Instruction: You are a helpful assistant.
User: When is the release date of the video game Portal?
Assistant: The release date of the video game Portal is October 9.
Your output should be:
diff --git a/camel/datagen/cot_datagen.py b/camel/datagen/cot_datagen.py
index 0a29300203..c9e6ba6eff 100644
--- a/camel/datagen/cot_datagen.py
+++ b/camel/datagen/cot_datagen.py
@@ -39,7 +39,7 @@ class AgentResponse(BaseModel):
score: Annotated[float, confloat(ge=0, le=1)] = Field(
...,
- description="""Similarity score between 0 and 1
+ description="""Similarity score between 0 and 1
comparing current answer to correct answer""",
)
diff --git a/camel/datagen/evol_instruct/scorer.py b/camel/datagen/evol_instruct/scorer.py
index 0b02a93dd0..aaf0c159d5 100644
--- a/camel/datagen/evol_instruct/scorer.py
+++ b/camel/datagen/evol_instruct/scorer.py
@@ -39,24 +39,24 @@ def score(
class MathScorer(BaseScorer):
def __init__(self, agent: Optional[ChatAgent] = None):
self.system_msg = """
-You are an evaluator for math problems. Your task is to compare a new math
-problem against a reference math problem by trying to solve it, and rate it
+You are an evaluator for math problems. Your task is to compare a new math
+problem against a reference math problem by trying to solve it, and rate it
in **three dimensions**.
-1. Diversity (1-5): How novel is the new problem compared to the
+1. Diversity (1-5): How novel is the new problem compared to the
reference? 1 = almost the same, 5 = completely different.
-2. Difficulty (1-10): Rate the relative difficulty compared to the reference
-problem. 1 = much less difficult, 5 = similar difficulty, 10 = much more
-difficult. The difficulty should be based on the complexity of reasoning—i.e.,
+2. Difficulty (1-10): Rate the relative difficulty compared to the reference
+problem. 1 = much less difficult, 5 = similar difficulty, 10 = much more
+difficult. The difficulty should be based on the complexity of reasoning—i.e.,
problems that require multi-step reasoning or clever methods to solve.
-3. Solvability (1-10): How likely is the problem solvable using standard math
-techniques and only contain one question that could be answered by a number or
-a formula? 1 = very unsolvable or ambiguous, 10 = solvable and could be
+3. Solvability (1-10): How likely is it that the problem is solvable using
+standard math techniques and contains only one question that could be answered
+by a number or a formula? 1 = very unsolvable or ambiguous, 10 = solvable and could be
answered by a number or a formula.
-Respond with a JSON object like:
+Respond with a JSON object like:
{ "solution": ..., "diversity": ..., "difficulty": ..., "solvability": ... }
"""
self.agent = agent or ChatAgent(self.system_msg)
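The system message above asks the evaluator to reply with a JSON object containing `solution`, `diversity`, `difficulty`, and `solvability`. A small hypothetical helper (not part of CAMEL) showing how such a reply might be extracted and validated:

```python
import json

REQUIRED = ("solution", "diversity", "difficulty", "solvability")


def parse_math_score(reply: str) -> dict:
    # Pull the first {...} span out of the evaluator's free-form reply,
    # then check that the four requested fields are present.
    start, end = reply.find("{"), reply.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object found in reply")
    data = json.loads(reply[start : end + 1])
    missing = [key for key in REQUIRED if key not in data]
    if missing:
        raise ValueError(f"missing fields: {missing}")
    return data


score = parse_math_score(
    'Here is my rating: {"solution": "x = 2", "diversity": 4, '
    '"difficulty": 6, "solvability": 9}'
)
```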
diff --git a/camel/datagen/evol_instruct/templates.py b/camel/datagen/evol_instruct/templates.py
index 5683be95da..70cafddf03 100644
--- a/camel/datagen/evol_instruct/templates.py
+++ b/camel/datagen/evol_instruct/templates.py
@@ -231,29 +231,29 @@ class MathEvolInstructTemplates(BaseEvolInstructTemplates):
EVOL_METHODS = {
"constraints": """
Add one or more significant constraints or requirements into the
-'#Given Prompt#'. The added constraints must meaningfully alter how the model
-would respond. For example, specify additional rules, contexts, or limitations
-that demand creative adjustments. This method should make the problem more
-challenging in the reasoning and the solution of it should be clever and
+'#Given Prompt#'. The added constraints must meaningfully alter how the model
+would respond. For example, specify additional rules, contexts, or limitations
+that demand creative adjustments. This method should make the problem more
+challenging to reason about, and its solution should be clever and
elegant.
""",
"deepening": """
-Increase the difficulty of the #Given Prompt# by integrating additional layers
-of reasoning and rigor. Refine the problem so that all added difficulty is
-consolidated into a single coherent question requiring one final answer,
+Increase the difficulty of the #Given Prompt# by integrating additional layers
+of reasoning and rigor. Refine the problem so that all added difficulty is
+consolidated into a single coherent question requiring one final answer,
avoiding fragmentation into multiple sub-problems.
""",
"expansion": """
-Expand the #Given Prompt# by incorporating additional perspectives or layers
-of complexity into the problem statement. Ensure that the revised problem
-remains a single, unified question with one final answer, rather than a
+Expand the #Given Prompt# by incorporating additional perspectives or layers
+of complexity into the problem statement. Ensure that the revised problem
+remains a single, unified question with one final answer, rather than a
series of separate sub-questions.
""",
"condense": """
-Reformulate the given math problem into a well-structured and formally stated
-mathematical question. Remove unnecessary instructions, explanations, or hints.
-If the given problem contains several sub-questions, make necessary changes
-to let the problem could be answered with one number or one expression by
+Reformulate the given math problem into a well-structured and formally stated
+mathematical question. Remove unnecessary instructions, explanations, or hints.
+If the given problem contains several sub-questions, make necessary changes
+so that the problem can be answered with one number or one expression by
removing the sub-questions or combining them into one.
""",
}
diff --git a/camel/datagen/self_improving_cot.py b/camel/datagen/self_improving_cot.py
index d03f2d1a4a..53da1b05bb 100644
--- a/camel/datagen/self_improving_cot.py
+++ b/camel/datagen/self_improving_cot.py
@@ -861,7 +861,7 @@ def generate(self, rationalization: bool = False) -> List[Dict[str, Any]]:
Please show your complete reasoning process."""
- EVALUATION_TEMPLATE = """Please evaluate this reasoning trace and
+ EVALUATION_TEMPLATE = """Please evaluate this reasoning trace and
provide scores and feedback in valid JSON format.
Problem: {problem}
@@ -884,7 +884,7 @@ def generate(self, rationalization: bool = False) -> List[Dict[str, Any]]:
"feedback": ""
}}"""
- IMPROVEMENT_TEMPLATE = """Based on this feedback, generate an
+ IMPROVEMENT_TEMPLATE = """Based on this feedback, generate an
improved reasoning trace:
Problem: {problem}
diff --git a/camel/datagen/self_instruct/templates.py b/camel/datagen/self_instruct/templates.py
index 8a34c05656..aaf61e36c8 100644
--- a/camel/datagen/self_instruct/templates.py
+++ b/camel/datagen/self_instruct/templates.py
@@ -23,105 +23,105 @@ class SelfInstructTemplates:
Task: Given my personality and the job, tell me if I would be suitable.
Is it classification? Yes
-
+
Task: Give me an example of a time when you had to use your sense of humor.
Is it classification? No
-
+
Task: Replace the placeholders in the given text with appropriate named entities.
Is it classification? No
-
+
Task: Fact checking - tell me if the statement is true, false, or unknown, based on your knowledge and common sense.
Is it classification? Yes
-
+
Task: Return the SSN number for the person.
Is it classification? No
-
+
Task: Detect if the Reddit thread contains hate speech.
Is it classification? Yes
-
+
Task: Analyze the sentences below to identify biases.
Is it classification? No
-
+
Task: Select the longest sentence in terms of the number of words in the paragraph, output the sentence index.
Is it classification? Yes
-
+
Task: Find out the toxic word or phrase in the sentence.
Is it classification? No
-
+
Task: Rank these countries by their population.
Is it classification? No
-
+
Task: You are provided with a news article, and you need to identify all the categories that this article belongs to. Possible categories include: Music, Sports, Politics, Tech, Finance, Basketball, Soccer, Tennis, Entertainment, Digital Game, World News. Output its categories one by one, separated by commas.
Is it classification? Yes
-
+
Task: Given the name of an exercise, explain how to do it.
Is it classification? No
-
+
Task: Select the oldest person from the list.
Is it classification? Yes
-
+
Task: Find the four smallest perfect numbers.
Is it classification? No
-
+
Task: Does the information in the document support the claim? You can answer "Support" or "Unsupport".
Is it classification? Yes
-
+
Task: Create a detailed budget for the given hypothetical trip.
Is it classification? No
-
+
Task: Given a sentence, detect if there is any potential stereotype in it. If so, you should explain the stereotype. Else, output no.
Is it classification? No
-
+
Task: Explain the following idiom to me, and try to give me some examples.
Is it classification? No
-
+
Task: Is there anything I can eat for a breakfast that doesn't include eggs, yet includes protein, and has roughly 700-1000 calories?
Is it classification? No
-
+
Task: Answer the following multiple choice question. Select A, B, C, or D for the final answer.
Is it classification? Yes
-
+
Task: Decide whether the syllogism is logically sound.
Is it classification? Yes
-
+
Task: How can individuals and organizations reduce unconscious bias?
Is it classification? No
-
+
Task: What are some things you can do to de-stress?
Is it classification? No
-
+
Task: Find out the largest one from a set of numbers. Output the number directly.
Is it classification? Yes
-
+
Task: Replace the token in the text with proper words that are consistent with the context. You can use multiple words for each token.
Is it classification? No
-
+
Task: Write a cover letter based on the given facts.
Is it classification? No
-
+
Task: Identify the pos tag of the word in the given sentence.
Is it classification? Yes
-
+
Task: Write a program to compute the sum of integers from k to n.
Is it classification? No
-
+
Task: In this task, you need to compare the meaning of the two sentences and tell if they are the same. Output yes or no.
Is it classification? Yes
-
+
Task: To make the pairs have the same analogy, write the fourth word.
Is it classification? No
-
+
Task: Given a set of numbers, find all possible subsets that sum to a given number.
Is it classification? No
-
+
"""
- output_first_template_for_clf = '''You are given a classification instruction.
-
+ output_first_template_for_clf = '''You are given a classification instruction.
+
Produce multiple labeled examples following the format below. For each example:
- Begin with a "Class label:" line identifying one possible category.
- Follow that with one line specifying the example input (e.g., "Sentence:", "Dialogue:", "Opinion:", or "Email:").
- The content after these lines should serve as an illustrative example of that label.
-
+
Do not restate or include the "Task:" line. Do not add additional commentary. Just produce the labeled examples.
Example format (no initial task line, task will be provided) when task is Task: Classify the sentiment of the sentence into positive, negative, or mixed.:
@@ -131,9 +131,9 @@ class SelfInstructTemplates:
Sentence: I had a great day today. The weather was beautiful and I spent time with friends and family.
Class label: Negative
Sentence: I was really disappointed by the latest superhero movie. I would not recommend it to anyone.
-
+
Below are more examples:
-
+
Task: Given a dialogue, classify whether the user is satisfied with the service. You should respond with "Satisfied" or "Unsatisfied".
Class label: Satisfied
Dialogue:
@@ -233,12 +233,12 @@ def calculate_average(numbers):
Post: I can't believe the government is still not taking action on climate change. It's time for us to take matters into our own hands.
Hashtags: #climatechange #actnow
Topic: Climate change
- Class label: Not relevant
+ Class label: Not relevant
Post: I just bought the new iPhone and it is amazing!
Hashtags: #apple #technology
Topic: Travel
- Task: The answer will be 'yes' if the provided sentence contains an explicit mention that answers the given question. Otherwise, answer 'no'.
+ Task: The answer will be 'yes' if the provided sentence contains an explicit mention that answers the given question. Otherwise, answer 'no'.
Class label: Yes
Sentence: Jack played basketball for an hour after school.
Question: How long did Jack play basketball?
@@ -268,22 +268,22 @@ def calculate_average(numbers):
Task: {instruction}
'''
- input_first_template_for_gen = '''You will be given a task,
- Your job is to generate at most two example instances demonstrating how to
+ input_first_template_for_gen = '''You will be given a task.
+ Your job is to generate at most two example instances demonstrating how to
perform this task. For each instance:
- If the task requires input (as an actual example of the task), provide it.
- If the task can be answered directly without requiring input, omit the input section.
-
+
Example 1
Input: [Provide input here if needed, otherwise omit this section]
Output: [Provide the correct output]
-
+
Example 2
Input: [Provide input here if needed, otherwise omit this section]
Output: [Provide the correct output]
Do not include any additional commentary, explanations, or more than two instances.
-
+
Below are some examples:
Task: Which exercises are best for reducing belly fat at home?
@@ -302,7 +302,7 @@ def calculate_average(numbers):
Task: Converting 85 F to Celsius.
Output: 85°F = 29.44°C
- Task: Sort the given list ascendingly.
+ Task: Sort the given list in ascending order.
Example 1
List: [10, 92, 2, 5, -4, 92, 5, 101]
Output: [-4, 2, 5, 5, 10, 92, 92, 101]
@@ -323,7 +323,7 @@ def calculate_average(numbers):
Paragraph: Gun violence in the United States results in tens of thousands of deaths and injuries annually, and was the leading cause of death for children 19 and younger in 2020. In 2018, the most recent year for which data are available as of 2021, the Centers for Disease Control and Prevention's (CDC) National Center for Health Statistics reports 38,390 deaths by firearm, of which 24,432 were by suicide. The rate of firearm deaths per 100,000 people rose from 10.3 per 100,000 in 1999 to 12 per 100,000 in 2017, with 109 people dying per day or about 14,542 homicides in total, being 11.9 per 100,000 in 2018. In 2010, there were 19,392 firearm-related suicides, and 11,078 firearm-related homicides in the U.S. In 2010, 358 murders were reported involving a rifle while 6,009 were reported involving a handgun; another 1,939 were reported with an unspecified type of firearm. In 2011, a total of 478,400 fatal and nonfatal violent crimes were committed with a firearm.
Question: How many more firearm-related deaths were there in 2018 compared to 2010?
Output:
- 38390 - (19392 + 11078) = 38390 - 30470 = 7920.
+ 38390 - (19392 + 11078) = 38390 - 30470 = 7920.
So, in 2018, there were 7920 more deaths by firearm than in 2010.
Task: Write Python code to solve this leetcode problem.
diff --git a/camel/datasets/few_shot_generator.py b/camel/datasets/few_shot_generator.py
index fd9dfd0d8e..92ac585bc2 100644
--- a/camel/datasets/few_shot_generator.py
+++ b/camel/datasets/few_shot_generator.py
@@ -29,24 +29,24 @@
logger = get_logger(__name__)
-SYSTEM_PROMPT = """**You are an advanced data generation assistant.**
-Your goal is to generate high-quality synthetic data points based on
-provided examples. Your output must be well-structured,
-logically sound, and formatted correctly.
+SYSTEM_PROMPT = """**You are an advanced data generation assistant.**
+Your goal is to generate high-quality synthetic data points based on
+provided examples. Your output must be well-structured,
+logically sound, and formatted correctly.
**Instructions:**
-1. **Follow the Structure**
- Each data point must include:
- - **Question**: A clear, well-formed query.
- - **Rationale**: A step-by-step, executable reasoning process ending
- with `print(final_answer)`.
- - **Final Answer**: The correct, concise result.
-
-2. **Ensure Logical Consistency**
- - The `rationale` must be code that runs correctly.
- - The `final_answer` should match the printed output.
-
-3. **Output Format (Strict)**
+1. **Follow the Structure**
+ Each data point must include:
+ - **Question**: A clear, well-formed query.
+ - **Rationale**: A step-by-step, executable reasoning process ending
+ with `print(final_answer)`.
+ - **Final Answer**: The correct, concise result.
+
+2. **Ensure Logical Consistency**
+ - The `rationale` must be code that runs correctly.
+ - The `final_answer` should match the printed output.
+
+3. **Output Format (Strict)**
```
Question: [Generated question]
Rationale: [Code that solves the question, ending in a print statement,
diff --git a/camel/memories/blocks/chat_history_block.py b/camel/memories/blocks/chat_history_block.py
index 1f311f131a..7c49435eb7 100644
--- a/camel/memories/blocks/chat_history_block.py
+++ b/camel/memories/blocks/chat_history_block.py
@@ -93,7 +93,7 @@ def retrieve(
Message Processing Logic:
1. Preserve first system/developer message (if needed)
2. Keep latest window_size messages from the rest
-
+
Examples:
- Case 1: First message is SYSTEM, total 5 messages, window_size=2
Input: [system_msg, user_msg1, user_msg2, user_msg3, user_msg4]
diff --git a/camel/models/_utils.py b/camel/models/_utils.py
index 462606efb5..0dc76a4921 100644
--- a/camel/models/_utils.py
+++ b/camel/models/_utils.py
@@ -48,7 +48,7 @@ def try_modify_message_with_format(
updated_prompt = textwrap.dedent(
f"""\
{message["content"]}
-
+
Please generate a JSON response adhering to the following JSON schema:
{json_schema}
Make sure the JSON response is valid and matches the EXACT structure defined in the schema. Your result should ONLY be a valid json object, WITHOUT ANY OTHER TEXT OR COMMENTS.
diff --git a/camel/models/sglang_model.py b/camel/models/sglang_model.py
index e7244203ac..45dd9d637d 100644
--- a/camel/models/sglang_model.py
+++ b/camel/models/sglang_model.py
@@ -504,10 +504,10 @@ def _wait_for_server(base_url: str, timeout: Optional[float] = 30) -> None:
print(
"""\n
NOTE: Typically, the server runs in a separate terminal.
- In this notebook, we run the server and notebook code
+ In this notebook, we run the server and notebook code
together, so their outputs are combined.
- To improve clarity, the server logs are displayed in the
- original black color, while the notebook outputs are
+ To improve clarity, the server logs are displayed in the
+ original black color, while the notebook outputs are
highlighted in blue.
"""
)
diff --git a/camel/prompts/persona_hub.py b/camel/prompts/persona_hub.py
index b8b6f939ce..a2dc65c537 100644
--- a/camel/prompts/persona_hub.py
+++ b/camel/prompts/persona_hub.py
@@ -44,7 +44,7 @@ class PersonaHubPrompt(TextPromptDict):
""") # noqa: E501
PERSONA_TO_PERSONA = TextPrompt("""
-Given the following persona:
+Given the following persona:
{persona_name}
{persona_description}
diff --git a/camel/prompts/solution_extraction.py b/camel/prompts/solution_extraction.py
index 547c6683ec..3da49fbfa6 100644
--- a/camel/prompts/solution_extraction.py
+++ b/camel/prompts/solution_extraction.py
@@ -29,13 +29,13 @@ class SolutionExtractionPromptTemplateDict(TextPromptDict):
"""
ASSISTANT_PROMPT = TextPrompt(
- """You are an experienced solution extracting agent.
-Your task is to extract full and complete solutions by looking at the conversation between a user and an assistant with particular specializations.
-You should present me with a final and detailed solution purely based on the conversation.
-You should present the solution as if its yours.
-Use present tense and as if you are the one presenting the solution.
+ """You are an experienced solution extracting agent.
+Your task is to extract full and complete solutions by looking at the conversation between a user and an assistant with particular specializations.
+You should present me with a final and detailed solution purely based on the conversation.
+You should present the solution as if it's yours.
+Use the present tense, as if you are the one presenting the solution.
You should not miss any necessary details or examples.
-Keep all provided explanations and codes provided throughout the conversation.
+Keep all explanations and code provided throughout the conversation.
Remember your task is not to summarize rather to extract the full solution."""
)
diff --git a/camel/prompts/video_description_prompt.py b/camel/prompts/video_description_prompt.py
index 92de2c956b..95a16b4efd 100644
--- a/camel/prompts/video_description_prompt.py
+++ b/camel/prompts/video_description_prompt.py
@@ -28,7 +28,7 @@ class VideoDescriptionPromptTemplateDict(TextPromptDict):
"""
ASSISTANT_PROMPT = TextPrompt(
- """You are a master of video analysis.
+ """You are a master of video analysis.
Please provide a shot description of the content of the current video."""
)
diff --git a/camel/runtimes/docker_runtime.py b/camel/runtimes/docker_runtime.py
index 04d4cffd0f..31d3001de3 100644
--- a/camel/runtimes/docker_runtime.py
+++ b/camel/runtimes/docker_runtime.py
@@ -295,14 +295,14 @@ def wrapper(
)
if resp.status_code != 200:
logger.error(
- f"""ailed to execute function:
- {func.get_function_name()},
- status code: {resp.status_code},
+ f"""Failed to execute function:
+ {func.get_function_name()},
+ status code: {resp.status_code},
response: {resp.text}"""
)
return {
"error": f"""Failed to execute function:
- {func.get_function_name()},
+ {func.get_function_name()},
response: {resp.text}"""
}
data = resp.json()
diff --git a/camel/runtimes/llm_guard_runtime.py b/camel/runtimes/llm_guard_runtime.py
index 385a55f3fa..488e40e933 100644
--- a/camel/runtimes/llm_guard_runtime.py
+++ b/camel/runtimes/llm_guard_runtime.py
@@ -26,37 +26,37 @@
logger = logging.getLogger(__name__)
-GUARDPROMPT = """You are a function safety evaluator tasked with assessing the
-potential risk level of a given function based on both its description
-and parameters. Your goal is to determine if the function may pose any
-harm to the user's environment, such as deleting files, executing
-arbitrary code, or accessing sensitive information. Pay special attention
-to the provided parameters — even if a function has the potential to be
-dangerous, its actual parameters may indicate harmless behavior, and the
-risk level should be adjusted accordingly. Use the `function_risk` tool to
+GUARDPROMPT = """You are a function safety evaluator tasked with assessing the
+potential risk level of a given function based on both its description
+and parameters. Your goal is to determine if the function may pose any
+harm to the user's environment, such as deleting files, executing
+arbitrary code, or accessing sensitive information. Pay special attention
+to the provided parameters — even if a function has the potential to be
+dangerous, its actual parameters may indicate harmless behavior, and the
+risk level should be adjusted accordingly. Use the `function_risk` tool to
assign a risk score to the function, based on the following criteria:
-- **Score 1**: No harm. This includes simple operations like mathematical
- calculations, content searches, or data retrievals that do not impact
- the user's environment or access sensitive resources. This also
- includes functions with potentially dangerous capabilities that have
+- **Score 1**: No harm. This includes simple operations like mathematical
+ calculations, content searches, or data retrievals that do not impact
+ the user's environment or access sensitive resources. This also
+ includes functions with potentially dangerous capabilities that have
harmless, controlled parameters that ensure safe execution.
-- **Score 2**: Minimal harm. The function might read user files, retrieve
- non-sensitive data, or access other low-risk resources,
+- **Score 2**: Minimal harm. The function might read user files, retrieve
+ non-sensitive data, or access other low-risk resources,
posing little risk to the user.
-- **Score 3**: Risk present. The function might delete files, modify the
- file system, execute arbitrary code, or access sensitive data, which
- could negatively impact the user's environment. However, if the
- actual parameters provided clearly indicate safe and restricted
+- **Score 3**: Risk present. The function might delete files, modify the
+ file system, execute arbitrary code, or access sensitive data, which
+ could negatively impact the user's environment. However, if the
+ actual parameters provided clearly indicate safe and restricted
usage, this risk should be downgraded accordingly.
-When evaluating, always consider both the function's description and its
-specific parameters. If the function appears risky due to its design but
-the provided parameters indicate a safe and non-impactful operation,
-adjust the risk score to reflect this. Assign an appropriate risk score
-and provide a brief explanation of your reasoning based on the function's
+When evaluating, always consider both the function's description and its
+specific parameters. If the function appears risky due to its design but
+the provided parameters indicate a safe and non-impactful operation,
+adjust the risk score to reflect this. Assign an appropriate risk score
+and provide a brief explanation of your reasoning based on the function's
description and the actual parameters given.
-YOU MUST USE THE `function_risk` TOOL TO ASSESS THE RISK
+YOU MUST USE THE `function_risk` TOOL TO ASSESS THE RISK
LEVEL OF EACH FUNCTION.
"""
diff --git a/camel/runtimes/remote_http_runtime.py b/camel/runtimes/remote_http_runtime.py
index 67b9b1732e..ff56c2f6f5 100644
--- a/camel/runtimes/remote_http_runtime.py
+++ b/camel/runtimes/remote_http_runtime.py
@@ -124,14 +124,14 @@ def wrapper(
)
if resp.status_code != 200:
logger.error(
- f"""ailed to execute function:
- {func.get_function_name()},
- status code: {resp.status_code},
+ f"""Failed to execute function:
+ {func.get_function_name()},
+ status code: {resp.status_code},
response: {resp.text}"""
)
return {
"error": f"""Failed to execute function:
- {func.get_function_name()},
+ {func.get_function_name()},
response: {resp.text}"""
}
data = resp.json()
diff --git a/camel/schemas/openai_converter.py b/camel/schemas/openai_converter.py
index 1421cabb54..dc8588d241 100644
--- a/camel/schemas/openai_converter.py
+++ b/camel/schemas/openai_converter.py
@@ -28,7 +28,7 @@
from .base import BaseConverter
DEFAULT_CONVERTER_PROMPTS = """
- Extract key entities and attributes from the user
+ Extract key entities and attributes from the user
provided text, and convert them into a structured JSON format.
"""
diff --git a/camel/societies/workforce/prompts.py b/camel/societies/workforce/prompts.py
index acba05ad88..ad8e1aaa76 100644
--- a/camel/societies/workforce/prompts.py
+++ b/camel/societies/workforce/prompts.py
@@ -57,7 +57,7 @@
Each assignment dictionary should have:
- "task_id": the ID of the task
-- "assignee_id": the ID of the chosen worker node
+- "assignee_id": the ID of the chosen worker node
- "dependencies": list of task IDs that this task depends on (empty list if no dependencies)
Example valid response:
@@ -417,7 +417,7 @@
QUALITY_EVALUATION_RESPONSE_FORMAT = """JSON format:
{
"quality_score": 0-100,
- "reasoning": "explanation (1-2 sentences)",
+ "reasoning": "explanation (1-2 sentences)",
"issues": ["issue1", "issue2"],
"recovery_strategy": "retry|reassign|replan|decompose or null",
"modified_task_content": "new content if replan, else null"
diff --git a/camel/societies/workforce/structured_output_handler.py b/camel/societies/workforce/structured_output_handler.py
index 48096b8aed..3f35afaf2e 100644
--- a/camel/societies/workforce/structured_output_handler.py
+++ b/camel/societies/workforce/structured_output_handler.py
@@ -144,9 +144,9 @@ def generate_structured_prompt(
# Add critical reminder
structured_section += """
-**CRITICAL**: Your response must contain ONLY the JSON object within the code
+**CRITICAL**: Your response must contain ONLY the JSON object within the code
block.
-Do not include any explanatory text, comments, or content outside the JSON
+Do not include any explanatory text, comments, or content outside the JSON
structure.
Ensure the JSON is valid and properly formatted.
"""
diff --git a/camel/storages/graph_storages/nebula_graph.py b/camel/storages/graph_storages/nebula_graph.py
index 14e8a48caa..7affbface4 100644
--- a/camel/storages/graph_storages/nebula_graph.py
+++ b/camel/storages/graph_storages/nebula_graph.py
@@ -534,8 +534,8 @@ def _check_edges(self, entity_id: str) -> bool:
"""
# Combine the outgoing and incoming edge count query
check_query = f"""
- (GO FROM {entity_id} OVER * YIELD count(*) as out_count)
- UNION
+ (GO FROM {entity_id} OVER * YIELD count(*) as out_count)
+ UNION
(GO FROM {entity_id} REVERSELY OVER * YIELD count(*) as in_count)
"""
diff --git a/camel/storages/graph_storages/neo4j_graph.py b/camel/storages/graph_storages/neo4j_graph.py
index aee7d924b9..a8fe0d28f4 100644
--- a/camel/storages/graph_storages/neo4j_graph.py
+++ b/camel/storages/graph_storages/neo4j_graph.py
@@ -642,9 +642,9 @@ def random_walk_with_restarts(
nodeLabelStratification: $nodeLabelStratification,
relationshipWeightProperty: $relationshipWeightProperty
})
- YIELD graphName, fromGraphName, nodeCount,
+ YIELD graphName, fromGraphName, nodeCount,
relationshipCount, startNodeCount, projectMillis
- RETURN graphName, fromGraphName, nodeCount,
+ RETURN graphName, fromGraphName, nodeCount,
relationshipCount, startNodeCount, projectMillis
"""
@@ -710,9 +710,9 @@ def common_neighbour_aware_random_walk(
nodeLabelStratification: $nodeLabelStratification,
relationshipWeightProperty: $relationshipWeightProperty
})
- YIELD graphName, fromGraphName, nodeCount,
+ YIELD graphName, fromGraphName, nodeCount,
relationshipCount, startNodeCount, projectMillis
- RETURN graphName, fromGraphName, nodeCount,
+ RETURN graphName, fromGraphName, nodeCount,
relationshipCount, startNodeCount, projectMillis
"""
@@ -766,7 +766,7 @@ def get_triplet(
WHERE ($subj IS NULL OR n1.id = $subj)
AND ($obj IS NULL OR n2.id = $obj)
AND ($rel IS NULL OR type(r) = $rel)
- RETURN n1.id AS subj, n2.id AS obj,
+ RETURN n1.id AS subj, n2.id AS obj,
type(r) AS rel, r.timestamp AS timestamp
"""
diff --git a/camel/storages/vectordb_storages/pgvector.py b/camel/storages/vectordb_storages/pgvector.py
index 7e5f0736d8..e0c5f4a8ae 100644
--- a/camel/storages/vectordb_storages/pgvector.py
+++ b/camel/storages/vectordb_storages/pgvector.py
@@ -119,8 +119,8 @@ def _ensure_index(self) -> None:
with self._conn.cursor() as cur:
index_name = f"{self.table_name}_vector_idx"
query = SQL("""
- CREATE INDEX IF NOT EXISTS {index_name}
- ON {table}
+ CREATE INDEX IF NOT EXISTS {index_name}
+ ON {table}
USING hnsw (vector vector_cosine_ops)
""").format(
index_name=Identifier(index_name),
@@ -168,8 +168,8 @@ def add(self, records: List[VectorRecord], **kwargs: Any) -> None:
query = SQL("""
INSERT INTO {table} (id, vector, payload)
VALUES (%s, %s, %s)
- ON CONFLICT (id) DO UPDATE SET
- vector=EXCLUDED.vector,
+ ON CONFLICT (id) DO UPDATE SET
+ vector=EXCLUDED.vector,
payload=EXCLUDED.payload
""").format(table=Identifier(self.table_name))
@@ -249,7 +249,7 @@ def query(
from psycopg.sql import SQL, Identifier, Literal
query_sql = SQL("""
- SELECT id, vector, payload, (vector {} %s::vector)
+ SELECT id, vector, payload, (vector {} %s::vector)
AS similarity
FROM {}
ORDER BY similarity {}
diff --git a/camel/storages/vectordb_storages/surreal.py b/camel/storages/vectordb_storages/surreal.py
index b40bf73249..9ec3663523 100644
--- a/camel/storages/vectordb_storages/surreal.py
+++ b/camel/storages/vectordb_storages/surreal.py
@@ -12,7 +12,7 @@
# limitations under the License.
# ========= Copyright 2023-2024 @ CAMEL-AI.org. All Rights Reserved. =========
import re
-from typing import TYPE_CHECKING, Any, Dict, List, Optional
+from typing import Any, Dict, List, Optional, cast
from camel.logger import get_logger
from camel.storages.vectordb_storages import (
@@ -25,9 +25,6 @@
from camel.types import VectorDistance
from camel.utils import dependencies_required
-if TYPE_CHECKING:
- from surrealdb import Surreal # type: ignore[import-not-found]
-
logger = get_logger(__name__)
@@ -123,11 +120,12 @@ def _table_exists(self) -> bool:
bool: True if the table exists, False otherwise.
"""
res = self._surreal_client.query("INFO FOR DB;")
- tables = res.get('tables', {})
- logger.debug(f"_table_exists: {res}")
+ res_dict = cast(Dict[str, Any], res)
+ tables = res_dict.get('tables', {})
+ logger.debug(f"_table_exists: {res!r}")
return self.table in tables
- def _get_table_info(self) -> dict[str, int | None]:
+ def _get_table_info(self) -> Dict[str, Optional[int]]:
r"""Retrieve dimension and record count from the table metadata.
Returns:
@@ -136,8 +134,9 @@ def _get_table_info(self) -> dict[str, int | None]:
if not self._table_exists():
return {"dim": self.vector_dim, "count": 0}
res = self._surreal_client.query(f"INFO FOR TABLE {self.table};")
- logger.debug(f"_get_table_info: {res}")
- indexes = res.get("indexes", {})
+ res_dict = cast(Dict[str, Any], res)
+ logger.debug(f"_get_table_info: {res!r}")
+ indexes = res_dict.get("indexes", {})
dim = self.vector_dim
idx_def = indexes.get("hnsw_idx")
@@ -148,7 +147,8 @@ def _get_table_info(self) -> dict[str, int | None]:
cnt = self._surreal_client.query(
f"SELECT COUNT() FROM ONLY {self.table} GROUP ALL LIMIT 1;"
)
- count = cnt.get("count", 0)
+ cnt_dict = cast(Dict[str, Any], cnt)
+ count = cnt_dict.get("count", 0)
return {"dim": dim, "count": count}
def _create_table(self):
@@ -251,24 +251,31 @@ def query(
f"query surql: {surql_query} with $vector = {query.query_vector}"
)
- response = self._surreal_client.query(
- surql_query, {"vector": query.query_vector}
- )
- logger.debug(f"query response: {response}")
-
- return [
- VectorDBQueryResult(
- record=VectorRecord(
- id=row["id"].id,
- vector=row["embedding"],
- payload=row["payload"],
- ),
- similarity=1.0 - row["dist"]
- if self.distance == VectorDistance.COSINE
- else -row["score"],
+ query_params: Dict[str, Any] = {"vector": query.query_vector}
+ response = self._surreal_client.query(surql_query, query_params)
+ logger.debug(f"query response: {response!r}")
+
+ results: List[VectorDBQueryResult] = []
+ response_list = cast(List[Dict[str, Any]], response)
+ for row in response_list:
+ record_id = row["id"]
+ # SurrealDB RecordID has an 'id' attribute for the actual ID
+ actual_id = record_id.id if hasattr(record_id, 'id') else record_id
+ dist = float(row["dist"])
+ similarity = (
+ 1.0 - dist if self.distance == VectorDistance.COSINE else -dist
+ )
+ results.append(
+ VectorDBQueryResult(
+ record=VectorRecord(
+ id=actual_id,
+ vector=row["embedding"],
+ payload=row["payload"],
+ ),
+ similarity=similarity,
+ )
)
- for row in response
- ]
+ return results
def add(self, records: List[VectorRecord], **kwargs) -> None:
r"""Insert validated vector records into the SurrealDB table.
@@ -360,6 +367,6 @@ def load(self) -> None:
raise NotImplementedError("SurrealDB does not support loading")
@property
- def client(self) -> "Surreal":
+ def client(self) -> Any:
r"""Provides access to the underlying SurrealDB client."""
return self._surreal_client
diff --git a/camel/tasks/task_prompt.py b/camel/tasks/task_prompt.py
index f01fa79403..ed90074da6 100644
--- a/camel/tasks/task_prompt.py
+++ b/camel/tasks/task_prompt.py
@@ -47,7 +47,7 @@
{other_results}
-so, the final answer of the root task is:
+so, the final answer of the root task is:
"""
)
diff --git a/camel/toolkits/async_browser_toolkit.py b/camel/toolkits/async_browser_toolkit.py
index 0858b374e2..9a35d8416f 100644
--- a/camel/toolkits/async_browser_toolkit.py
+++ b/camel/toolkits/async_browser_toolkit.py
@@ -1383,8 +1383,8 @@ async def browse_url(
if not task_completed:
simulation_result = f"""
- The task is not completed within the round limit. Please check
- the last round {self.history_window} information to see if
+ The task is not completed within the round limit. Please check
+ the last round {self.history_window} information to see if
there is any useful information:
{self.history[-self.history_window:]}
"""
diff --git a/camel/toolkits/audio_analysis_toolkit.py b/camel/toolkits/audio_analysis_toolkit.py
index 1934743b5a..3ad29b5684 100644
--- a/camel/toolkits/audio_analysis_toolkit.py
+++ b/camel/toolkits/audio_analysis_toolkit.py
@@ -208,7 +208,7 @@ def ask_question_about_audio(self, audio_path: str, question: str) -> str:
{transcript}
speech_transcription_result>
- Please answer the following question based on the speech
+ Please answer the following question based on the speech
transcription result above:
{question}
"""
diff --git a/camel/toolkits/browser_toolkit.py b/camel/toolkits/browser_toolkit.py
index a851565640..be9742fdc7 100644
--- a/camel/toolkits/browser_toolkit.py
+++ b/camel/toolkits/browser_toolkit.py
@@ -680,15 +680,15 @@ def find_text_on_page(self, search_text: str) -> str:
# ruff: noqa: E501
assert self.page is not None
script = f"""
- (function() {{
+ (function() {{
let text = "{search_text}";
let found = window.find(text);
if (!found) {{
let elements = document.querySelectorAll("*:not(script):not(
- style)");
+ style)");
for (let el of elements) {{
if (el.innerText && el.innerText.includes(text)) {{
- el.scrollIntoView({{behavior: "smooth", block:
+ el.scrollIntoView({{behavior: "smooth", block:
"center"}});
el.style.backgroundColor = "yellow";
el.style.border = '2px solid red';
@@ -744,8 +744,8 @@ def show_interactive_elements(self):
self.page.evaluate(self.page_script)
self.page.evaluate("""
() => {
- document.querySelectorAll('a, button, input, select, textarea,
- [tabindex]:not([tabindex="-1"]),
+ document.querySelectorAll('a, button, input, select, textarea,
+ [tabindex]:not([tabindex="-1"]),
[contenteditable="true"]').forEach(el => {
el.style.border = '2px solid red';
});
@@ -1227,8 +1227,8 @@ def browse_url(
simulation_result: str
if not task_completed:
simulation_result = f"""
- The task is not completed within the round limit. Please
- check the last round {self.history_window} information to
+ The task is not completed within the round limit. Please
+ check the last round {self.history_window} information to
see if there is any useful information:
{self.history[-self.history_window :]}
"""
diff --git a/camel/toolkits/browser_toolkit_commons.py b/camel/toolkits/browser_toolkit_commons.py
index 7335252e29..41d951d9c9 100644
--- a/camel/toolkits/browser_toolkit_commons.py
+++ b/camel/toolkits/browser_toolkit_commons.py
@@ -134,7 +134,7 @@
TASK_PLANNING_PROMPT_TEMPLATE = """
{task_prompt}
-According to the problem above, if we use browser interaction, what is the general process of the interaction after visiting the webpage `{start_url}`?
+According to the problem above, if we use browser interaction, what is the general process of the interaction after visiting the webpage `{start_url}`?
Please note that it can be viewed as Partially Observable MDP. Do not over-confident about your plan.
Please first restate the task in detail, and then provide a detailed plan to solve the task.
@@ -156,7 +156,7 @@
Your output should be in json format, including the following fields:
- `if_need_replan`: bool, A boolean value indicating whether the task needs to be fundamentally replanned.
-- `replanned_schema`: str, The replanned schema for the task, which should not be changed too much compared with the original one. If the task does not need to be replanned, the value should be an empty string.
+- `replanned_schema`: str, The replanned schema for the task, which should not be changed too much compared with the original one. If the task does not need to be replanned, the value should be an empty string.
""" # noqa: E501
AVAILABLE_ACTIONS_PROMPT = """
diff --git a/camel/toolkits/excel_toolkit.py b/camel/toolkits/excel_toolkit.py
index f379e77558..e6892fc765 100644
--- a/camel/toolkits/excel_toolkit.py
+++ b/camel/toolkits/excel_toolkit.py
@@ -233,10 +233,10 @@ def extract_excel_content(self, document_path: str) -> str:
Sheet Name: {sheet_info['sheet_name']}
Cell information list:
{sheet_info['cell_info_list']}
-
+
Markdown View of the content:
{sheet_info['markdown_content']}
-
+
{'-'*40}
"""
diff --git a/camel/toolkits/function_tool.py b/camel/toolkits/function_tool.py
index c79b9170c6..4c0111ad5f 100644
--- a/camel/toolkits/function_tool.py
+++ b/camel/toolkits/function_tool.py
@@ -544,8 +544,8 @@ def validate_openai_tool_schema(
# Check the function description, if no description then raise warming
if not openai_tool_schema["function"].get("description"):
- warnings.warn(f"""Function description is missing for
- {openai_tool_schema['function']['name']}. This may
+ warnings.warn(f"""Function description is missing for
+ {openai_tool_schema['function']['name']}. This may
affect the quality of tool calling.""")
# Validate whether parameters
diff --git a/camel/toolkits/hybrid_browser_toolkit/ts/src/browser-scripts.js b/camel/toolkits/hybrid_browser_toolkit/ts/src/browser-scripts.js
index 93ec791910..ee6cd95400 100644
--- a/camel/toolkits/hybrid_browser_toolkit/ts/src/browser-scripts.js
+++ b/camel/toolkits/hybrid_browser_toolkit/ts/src/browser-scripts.js
@@ -74,22 +74,22 @@ function getDocumentDimensions() {
function waitForElement(selector, timeout = 5000) {
return new Promise((resolve, reject) => {
const startTime = Date.now();
-
+
function checkElement() {
const element = document.querySelector(selector);
if (element && element.offsetParent !== null) {
resolve(element);
return;
}
-
+
if (Date.now() - startTime > timeout) {
reject(new Error(`Element ${selector} not found within timeout`));
return;
}
-
+
setTimeout(checkElement, 100);
}
-
+
checkElement();
});
}
@@ -100,7 +100,7 @@ function waitForElement(selector, timeout = 5000) {
function getElementCoordinates(element) {
const rect = element.getBoundingClientRect();
const scroll = getCurrentScrollPosition();
-
+
return {
x: rect.left + scroll.x,
y: rect.top + scroll.y,
@@ -122,4 +122,4 @@ if (typeof module !== 'undefined' && module.exports) {
waitForElement,
getElementCoordinates
};
-}
\ No newline at end of file
+}
diff --git a/camel/toolkits/hybrid_browser_toolkit/ts/src/browser-session.ts b/camel/toolkits/hybrid_browser_toolkit/ts/src/browser-session.ts
index 8f5c8b6560..d59c8db4f2 100644
--- a/camel/toolkits/hybrid_browser_toolkit/ts/src/browser-session.ts
+++ b/camel/toolkits/hybrid_browser_toolkit/ts/src/browser-session.ts
@@ -13,7 +13,7 @@ export class HybridBrowserSession {
private configLoader: ConfigLoader;
private scrollPosition: { x: number; y: number } = {x: 0, y: 0};
private hasNavigatedBefore = false; // Track if we've navigated before
- private logLimit: number;
+ private logLimit: number;
constructor(config: BrowserToolkitConfig = {}) {
// Use ConfigLoader's fromPythonConfig to handle conversion properly
@@ -43,25 +43,25 @@ export class HybridBrowserSession {
});
}
- async ensureBrowser(): Promise<void> {
+ async ensureBrowser(): Promise<void> {
if (this.browser) {
return;
}
const browserConfig = this.configLoader.getBrowserConfig();
const stealthConfig = this.configLoader.getStealthConfig();
-
+
// Check if CDP URL is provided
if (browserConfig.cdpUrl) {
// Connect to existing browser via CDP
this.browser = await chromium.connectOverCDP(browserConfig.cdpUrl);
-
+
// Get existing contexts or create new one
const contexts = this.browser.contexts();
if (contexts.length > 0) {
this.context = contexts[0];
this.contextOwnedByUs = false;
-
+
// Apply stealth headers to existing context if configured
// Note: userAgent cannot be changed on an existing context
if (stealthConfig.enabled) {
@@ -76,7 +76,7 @@ export class HybridBrowserSession {
const contextOptions: any = {
viewport: browserConfig.viewport
};
-
+
// Apply stealth headers and UA if configured
if (stealthConfig.enabled) {
if (stealthConfig.extraHTTPHeaders) {
@@ -86,12 +86,12 @@ export class HybridBrowserSession {
contextOptions.userAgent = stealthConfig.userAgent;
}
}
-
+
this.context = await this.browser.newContext(contextOptions);
this.contextOwnedByUs = true;
this.browser = this.context.browser();
}
-
+
const pages = this.context.pages();
console.log(`[CDP] cdpKeepCurrentPage: ${browserConfig.cdpKeepCurrentPage}, pages count: ${pages.length}`);
if (browserConfig.cdpKeepCurrentPage) {
@@ -105,7 +105,7 @@ export class HybridBrowserSession {
break;
}
}
-
+
if (validPage) {
const tabId = this.generateTabId();
this.registerNewPage(tabId, validPage);
@@ -133,7 +133,7 @@ export class HybridBrowserSession {
break;
}
}
-
+
if (!availablePageFound) {
console.log('[CDP] No blank pages found, creating new page');
const newPage = await this.context.newPage();
@@ -157,7 +157,7 @@ export class HybridBrowserSession {
if (stealthConfig.enabled) {
launchOptions.args = stealthConfig.args || [];
-
+
// Apply stealth user agent/headers if configured
if (stealthConfig.userAgent) {
launchOptions.userAgent = stealthConfig.userAgent;
@@ -187,7 +187,7 @@ export class HybridBrowserSession {
const contextOptions: any = {
viewport: browserConfig.viewport
};
-
+
// Apply stealth headers and UA if configured
if (stealthConfig.enabled) {
if (stealthConfig.extraHTTPHeaders) {
@@ -197,10 +197,10 @@ export class HybridBrowserSession {
contextOptions.userAgent = stealthConfig.userAgent;
}
}
-
+
this.context = await this.browser.newContext(contextOptions);
this.contextOwnedByUs = true;
-
+
const initialPage = await this.context.newPage();
const initialTabId = this.generateTabId();
this.registerNewPage(initialTabId, initialPage);
@@ -225,7 +225,7 @@ export class HybridBrowserSession {
const browserConfig = this.configLoader.getBrowserConfig();
return (
// Standard about:blank variations (prefix match for query params)
- url === 'about:blank' ||
+ url === 'about:blank' ||
url.startsWith('about:blank?') ||
// Configured blank page URLs (exact match for compatibility)
browserConfig.blankPageUrls.includes(url) ||
@@ -239,12 +239,12 @@ export class HybridBrowserSession {
async getCurrentPage(): Promise<Page> {
if (!this.currentTabId || !this.pages.has(this.currentTabId)) {
const browserConfig = this.configLoader.getBrowserConfig();
-
+
// In CDP keep-current-page mode, find existing page
if (browserConfig.cdpKeepCurrentPage && browserConfig.cdpUrl && this.context) {
const allPages = this.context.pages();
console.log(`[getCurrentPage] cdpKeepCurrentPage mode: Looking for existing page, found ${allPages.length} pages`);
-
+
if (allPages.length > 0) {
// Try to find a page that's not already tracked
for (const page of allPages) {
@@ -257,7 +257,7 @@ export class HybridBrowserSession {
return page;
}
}
-
+
// If all pages are tracked, use the first available one
const firstPage = allPages[0];
if (!firstPage.isClosed()) {
@@ -271,10 +271,10 @@ export class HybridBrowserSession {
}
}
}
-
+
throw new Error('No active page available in CDP mode with cdpKeepCurrentPage=true');
}
-
+
// Normal mode: create new page
if (this.context) {
console.log('[getCurrentPage] No active page, creating new page');
@@ -282,10 +282,10 @@ export class HybridBrowserSession {
const tabId = this.generateTabId();
this.registerNewPage(tabId, newPage);
this.currentTabId = tabId;
-
+
newPage.setDefaultNavigationTimeout(browserConfig.navigationTimeout);
newPage.setDefaultTimeout(browserConfig.navigationTimeout);
-
+
return newPage;
}
throw new Error('No browser context available');
@@ -314,7 +314,7 @@ export class HybridBrowserSession {
zoomLevel: window.outerWidth / window.innerWidth || 1
};
}) as { x: number; y: number; devicePixelRatio: number; zoomLevel: number };
-
+
// Store scroll position
this.scrollPosition = { x: scrollInfo.x, y: scrollInfo.y };
return this.scrollPosition;
@@ -347,8 +347,8 @@ export class HybridBrowserSession {
private filterElementsInViewport(
- elements: Record,
- viewport: { width: number, height: number },
+ elements: Record,
+ viewport: { width: number, height: number },
scrollPos: { x: number, y: number }
): Record {
const filtered: Record = {};
@@ -358,31 +358,31 @@ export class HybridBrowserSession {
const viewportTop = 0;
const viewportRight = viewport.width;
const viewportBottom = viewport.height;
-
+
for (const [ref, element] of Object.entries(elements)) {
// If element has no coordinates, include it (fallback)
if (!element.coordinates) {
filtered[ref] = element;
continue;
}
-
+
const { x, y, width, height } = element.coordinates;
-
+
// Check if element is visible in current viewport
// Element is visible if it overlaps with viewport bounds
// Since boundingBox() coords are viewport-relative, we compare directly
const isVisible = (
x < viewportRight && // Left edge is before viewport right
- y < viewportBottom && // Top edge is before viewport bottom
+ y < viewportBottom && // Top edge is before viewport bottom
x + width > viewportLeft && // Right edge is after viewport left
y + height > viewportTop // Bottom edge is after viewport top
);
-
+
if (isVisible) {
filtered[ref] = element;
}
}
-
+
return filtered;
}
@@ -392,7 +392,7 @@ export class HybridBrowserSession {
viewportRefs: Set<string>,
tabSize: number = 2
): string[] {
- // Filter snapshot lines to include only those in viewportRefs
+ // Filter snapshot lines to include only those in viewportRefs
// and their context
const levelStack: number[] = [];
const filteredLines: string[] = [];
@@ -438,13 +438,13 @@ export class HybridBrowserSession {
private async getSnapshotForAINative(includeCoordinates = false, viewportLimit = false): Promise {
const startTime = Date.now();
const page = await this.getCurrentPage();
-
+
try {
// Use _snapshotForAI() to properly update _lastAriaSnapshot
const snapshotStart = Date.now();
const snapshotText = await (page as any)._snapshotForAI();
const snapshotTime = Date.now() - snapshotStart;
-
+
// Extract refs from the snapshot text
const refPattern = /\[ref=([^\]]+)\]/g;
const refs: string[] = [];
@@ -452,11 +452,11 @@ export class HybridBrowserSession {
while ((match = refPattern.exec(snapshotText)) !== null) {
refs.push(match[1]);
}
-
+
// Get element information including coordinates if needed
const mappingStart = Date.now();
const playwrightMapping: Record = {};
-
+
// Parse element info in a single pass
const snapshotIndex = this.buildSnapshotIndex(snapshotText);
for (const ref of refs) {
@@ -466,7 +466,7 @@ export class HybridBrowserSession {
role: role || 'unknown',
};
}
-
+
if (includeCoordinates) {
// Get coordinates for each ref using aria-ref selector
for (const ref of refs) {
@@ -474,11 +474,11 @@ export class HybridBrowserSession {
const selector = `aria-ref=${ref}`;
const element = await page.locator(selector).first();
const exists = await element.count() > 0;
-
+
if (exists) {
// Get bounding box
const boundingBox = await element.boundingBox();
-
+
if (boundingBox) {
// Add coordinates to existing element info
playwrightMapping[ref] = {
@@ -497,22 +497,22 @@ export class HybridBrowserSession {
}
}
}
-
+
const mappingTime = Date.now() - mappingStart;
-
+
// Apply viewport filtering if requested
let finalElements = playwrightMapping;
let finalSnapshot = snapshotText;
-
+
if (viewportLimit) {
const viewport = page.viewportSize() || { width: 1280, height: 720 };
const scrollPos = await this.getCurrentScrollPosition();
finalElements = this.filterElementsInViewport(playwrightMapping, viewport, scrollPos);
finalSnapshot = this.rebuildSnapshotText(snapshotText, finalElements);
}
-
+
const totalTime = Date.now() - startTime;
-
+
return {
snapshot: finalSnapshot,
elements: finalElements,
@@ -531,7 +531,7 @@ export class HybridBrowserSession {
} catch (error) {
console.error('Failed to get AI snapshot with native mapping:', error);
const totalTime = Date.now() - startTime;
-
+
return {
snapshot: 'Error: Unable to capture page snapshot',
elements: {},
@@ -556,58 +556,58 @@ export class HybridBrowserSession {
* Enhanced click implementation with new tab detection and scroll fix
*/
private async performClick(page: Page, ref: string): Promise<{ success: boolean; method?: string; error?: string; newTabId?: string; diffSnapshot?: string }> {
-
+
try {
// Ensure we have the latest snapshot and mapping
await (page as any)._snapshotForAI();
-
+
// Use Playwright's aria-ref selector engine
const selector = `aria-ref=${ref}`;
-
+
// Check if element exists
const element = await page.locator(selector).first();
const exists = await element.count() > 0;
-
+
if (!exists) {
return { success: false, error: `Element with ref ${ref} not found` };
}
-
+
const role = await element.getAttribute('role');
const elementTagName = await element.evaluate(el => el.tagName.toLowerCase());
const isCombobox = role === 'combobox' || elementTagName === 'combobox';
const isTextbox = role === 'textbox' || elementTagName === 'input' || elementTagName === 'textarea';
const shouldCheckDiff = isCombobox || isTextbox;
-
+
let snapshotBefore: string | null = null;
if (shouldCheckDiff) {
snapshotBefore = await (page as any)._snapshotForAI();
}
-
+
// Check element properties
const browserConfig = this.configLoader.getBrowserConfig();
const target = await element.getAttribute(browserConfig.targetAttribute);
const href = await element.getAttribute(browserConfig.hrefAttribute);
const onclick = await element.getAttribute(browserConfig.onclickAttribute);
const tagName = await element.evaluate(el => el.tagName.toLowerCase());
-
+
// Check if element naturally opens new tab
const naturallyOpensNewTab = (
- target === browserConfig.blankTarget ||
+ target === browserConfig.blankTarget ||
(onclick && onclick.includes(browserConfig.windowOpenString)) ||
(tagName === 'a' && href && (href.includes(`javascript:${browserConfig.windowOpenString}`) || href.includes(browserConfig.blankTarget)))
);
-
+
// Open ALL links in new tabs
// Check if this is a navigable link
- const isNavigableLink = tagName === 'a' && href &&
+ const isNavigableLink = tagName === 'a' && href &&
!href.startsWith(browserConfig.anchorOnly) && // Not an anchor link
!href.startsWith(browserConfig.javascriptVoidPrefix) && // Not a void javascript
href !== browserConfig.javascriptVoidEmpty && // Not empty javascript
href !== browserConfig.anchorOnly; // Not just #
-
+
const shouldOpenNewTab = naturallyOpensNewTab || isNavigableLink;
-
-
+
+
if (shouldOpenNewTab) {
// Handle new tab opening
// If it's a link that doesn't naturally open in new tab, force it
@@ -618,34 +618,34 @@ export class HybridBrowserSession {
}
}, browserConfig.blankTarget);
}
-
+
// Set up popup listener before clicking
const popupPromise = page.context().waitForEvent('page', { timeout: browserConfig.popupTimeout });
-
+
// Click with force to avoid scrolling issues
await element.click({ force: browserConfig.forceClick });
-
+
try {
// Wait for new page to open
const newPage = await popupPromise;
-
+
// Generate tab ID for the new page
const newTabId = this.generateTabId();
this.registerNewPage(newTabId, newPage);
-
+
// Set up page properties
const browserConfig = this.configLoader.getBrowserConfig();
newPage.setDefaultNavigationTimeout(browserConfig.navigationTimeout);
newPage.setDefaultTimeout(browserConfig.navigationTimeout);
-
-
+
+
// Automatically switch to the new tab
this.currentTabId = newTabId;
await newPage.bringToFront();
-
+
// Wait for new page to be ready
await newPage.waitForLoadState('domcontentloaded', { timeout: browserConfig.popupTimeout }).catch(() => {});
-
+
return { success: true, method: 'playwright-aria-ref-newtab', newTabId };
} catch (popupError) {
return { success: true, method: 'playwright-aria-ref' };
@@ -654,20 +654,20 @@ export class HybridBrowserSession {
// Add options to prevent scrolling issues
const browserConfig = this.configLoader.getBrowserConfig();
await element.click({ force: browserConfig.forceClick });
-
+
if (shouldCheckDiff && snapshotBefore) {
await page.waitForTimeout(300);
const snapshotAfter = await (page as any)._snapshotForAI();
const diffSnapshot = this.getSnapshotDiff(snapshotBefore, snapshotAfter, ['option', 'menuitem']);
-
+
if (diffSnapshot && diffSnapshot.trim() !== '') {
return { success: true, method: 'playwright-aria-ref', diffSnapshot };
}
}
-
+
return { success: true, method: 'playwright-aria-ref' };
}
-
+
} catch (error) {
console.error('[performClick] Exception during click for ref: %s', ref, error);
return { success: false, error: `Click failed with exception: ${error}` };
@@ -684,10 +684,10 @@ export class HybridBrowserSession {
while ((match = refPattern.exec(snapshotBefore)) !== null) {
refsBefore.add(match[1]);
}
-
+
const lines = snapshotAfter.split('\n');
const newElements: string[] = [];
-
+
for (const line of lines) {
const refMatch = line.match(/\[ref=([^\]]+)\]/);
if (refMatch && !refsBefore.has(refMatch[1])) {
@@ -695,13 +695,13 @@ export class HybridBrowserSession {
const rolePattern = new RegExp(`\\b${role}\\b`, 'i');
return rolePattern.test(line);
});
-
+
if (hasTargetRole) {
newElements.push(line.trim());
}
}
}
-
+
if (newElements.length > 0) {
return newElements.join('\n');
} else {
@@ -717,11 +717,11 @@ export class HybridBrowserSession {
try {
// Ensure we have the latest snapshot
await (page as any)._snapshotForAI();
-
+
// Handle multiple inputs if provided
if (inputs && inputs.length > 0) {
const results: Record = {};
-
+
for (const input of inputs) {
const singleResult = await this.performType(page, input.ref, input.text);
results[input.ref] = {
@@ -729,31 +729,31 @@ export class HybridBrowserSession {
error: singleResult.error
};
}
-
+
// Check if all inputs were successful
const allSuccess = Object.values(results).every(r => r.success);
const errors = Object.entries(results)
.filter(([_, r]) => !r.success)
.map(([ref, r]) => `${ref}: ${r.error}`)
.join('; ');
-
+
return {
success: allSuccess,
error: allSuccess ? undefined : `Some inputs failed: ${errors}`,
details: results
};
}
-
+
// Handle single input (backward compatibility)
if (ref && text !== undefined) {
const selector = `aria-ref=${ref}`;
const element = await page.locator(selector).first();
-
+
const exists = await element.count() > 0;
if (!exists) {
return { success: false, error: `Element with ref ${ref} not found` };
}
-
+
// Get element attributes to check if it's readonly or a special input type
let originalPlaceholder: string | null = null;
let isReadonly = false;
@@ -761,7 +761,7 @@ export class HybridBrowserSession {
let isCombobox = false;
let isTextbox = false;
let shouldCheckDiff = false;
-
+
try {
// Get element info in one evaluation to minimize interactions
const elementInfo = await element.evaluate((el: any) => {
@@ -775,22 +775,22 @@ export class HybridBrowserSession {
ariaHaspopup: el.getAttribute('aria-haspopup')
};
});
-
+
originalPlaceholder = elementInfo.placeholder;
isReadonly = elementInfo.readonly;
elementType = elementInfo.type;
- isCombobox = elementInfo.role === 'combobox' ||
+ isCombobox = elementInfo.role === 'combobox' ||
elementInfo.tagName === 'combobox' ||
elementInfo.ariaHaspopup === 'listbox';
- isTextbox = elementInfo.role === 'textbox' ||
- elementInfo.tagName === 'input' ||
+ isTextbox = elementInfo.role === 'textbox' ||
+ elementInfo.tagName === 'input' ||
elementInfo.tagName === 'textarea';
shouldCheckDiff = isCombobox || isTextbox;
-
+
} catch (e) {
console.log(`Warning: Failed to get element attributes: ${e}`);
}
-
+
// Get snapshot before action to record existing elements
const snapshotBefore = await (page as any)._snapshotForAI();
const existingRefs = new Set();
@@ -804,7 +804,7 @@ export class HybridBrowserSession {
// If element is readonly or a date/time input, skip fill attempt and go directly to click
if (isReadonly || ['date', 'datetime-local', 'time'].includes(elementType || '')) {
console.log(`Element ref=${ref} is readonly or date/time input, skipping direct fill attempt`);
-
+
// Click with force option to avoid scrolling
try {
await element.click({ force: true });
@@ -889,37 +889,37 @@ export class HybridBrowserSession {
// We already clicked during the click-then-fill strategy
await page.waitForTimeout(500);
}
-
+
// Step 1: Try to find input elements within the clicked element
const inputSelector = `input:visible, textarea:visible, [contenteditable="true"]:visible, [role="textbox"]:visible`;
const inputElement = await element.locator(inputSelector).first();
-
+
const inputExists = await inputElement.count() > 0;
if (inputExists) {
console.log(`Found input element within ref ${ref}, attempting to fill`);
try {
await inputElement.fill(text, { force: true });
-
+
// If element might show dropdown, check for new elements
if (shouldCheckDiff) {
await page.waitForTimeout(300);
const snapshotFinal = await (page as any)._snapshotForAI();
const diffSnapshot = this.getSnapshotDiff(snapshotBefore, snapshotFinal, ['option', 'menuitem']);
-
+
if (diffSnapshot && diffSnapshot.trim() !== '') {
return { success: true, diffSnapshot };
}
}
-
+
return { success: true };
} catch (innerError) {
console.log(`Failed to fill child element: ${innerError}`);
}
}
-
+
// Step 2: Look for new elements that appeared after the action
console.log(`Looking for new elements that appeared after action...`);
-
+
// Get snapshot after action to find new elements
const snapshotAfter = await (page as any)._snapshotForAI();
const newRefs = new Set();
@@ -931,24 +931,24 @@ export class HybridBrowserSession {
newRefs.add(refId);
}
}
-
+
console.log(`Found ${newRefs.size} new elements after action`);
-
+
// If we have a placeholder, try to find new input elements with that placeholder
if (originalPlaceholder && newRefs.size > 0) {
console.log(`Looking for new input elements with placeholder: ${originalPlaceholder}`);
-
+
// Try each new ref to see if it's an input with our placeholder
for (const newRef of newRefs) {
try {
const newElement = await page.locator(`aria-ref=${newRef}`).first();
const tagName = await newElement.evaluate(el => el.tagName.toLowerCase()).catch(() => null);
-
+
if (tagName === 'input' || tagName === 'textarea') {
const placeholder = await newElement.getAttribute('placeholder').catch(() => null);
if (placeholder === originalPlaceholder) {
console.log(`Found new input element with matching placeholder: ref=${newRef}`);
-
+
// Check if it's visible and fillable
const elementInfo = await newElement.evaluate((el: any) => {
return {
@@ -961,21 +961,21 @@ export class HybridBrowserSession {
};
});
console.log(`New element details:`, JSON.stringify(elementInfo));
-
+
// Try to fill it with force to avoid scrolling
await newElement.fill(text, { force: true });
-
+
// If element might show dropdown, check for new elements
if (shouldCheckDiff) {
await page.waitForTimeout(300);
const snapshotFinal = await (page as any)._snapshotForAI();
const diffSnapshot = this.getSnapshotDiff(snapshotBefore, snapshotFinal, ['option', 'menuitem']);
-
+
if (diffSnapshot && diffSnapshot.trim() !== '') {
return { success: true, diffSnapshot };
}
}
-
+
return { success: true };
}
}
@@ -984,19 +984,19 @@ export class HybridBrowserSession {
}
}
}
-
+
console.log(`No suitable input element found for ref ${ref}`);
}
// Re-throw the original error if we couldn't find an input element
throw fillError;
}
}
-
+
// If we skipped the fill attempt (readonly elements), look for new elements directly
if (isReadonly || ['date', 'datetime-local', 'time'].includes(elementType || '')) {
// Look for new elements that appeared after clicking
console.log(`Looking for new elements that appeared after clicking readonly element...`);
-
+
// Get snapshot after action to find new elements
const snapshotAfter = await (page as any)._snapshotForAI();
const newRefs = new Set();
@@ -1008,24 +1008,24 @@ export class HybridBrowserSession {
newRefs.add(refId);
}
}
-
+
console.log(`Found ${newRefs.size} new elements after clicking readonly element`);
-
+
// If we have a placeholder, try to find new input elements with that placeholder
if (originalPlaceholder && newRefs.size > 0) {
console.log(`Looking for new input elements with placeholder: ${originalPlaceholder}`);
-
+
// Try each new ref to see if it's an input with our placeholder
for (const newRef of newRefs) {
try {
const newElement = await page.locator(`aria-ref=${newRef}`).first();
const tagName = await newElement.evaluate(el => el.tagName.toLowerCase()).catch(() => null);
-
+
if (tagName === 'input' || tagName === 'textarea') {
const placeholder = await newElement.getAttribute('placeholder').catch(() => null);
if (placeholder === originalPlaceholder) {
console.log(`Found new input element with matching placeholder: ref=${newRef}`);
-
+
// Check if it's visible and fillable
const elementInfo = await newElement.evaluate((el: any) => {
return {
@@ -1038,21 +1038,21 @@ export class HybridBrowserSession {
};
});
console.log(`New element details:`, JSON.stringify(elementInfo));
-
+
// Try to fill it with force to avoid scrolling
await newElement.fill(text, { force: true });
-
+
// If element might show dropdown, check for new elements
if (shouldCheckDiff) {
await page.waitForTimeout(300);
const snapshotFinal = await (page as any)._snapshotForAI();
const diffSnapshot = this.getSnapshotDiff(snapshotBefore, snapshotFinal, ['option', 'menuitem']);
-
+
if (diffSnapshot && diffSnapshot.trim() !== '') {
return { success: true, diffSnapshot };
}
}
-
+
return { success: true };
}
}
@@ -1061,12 +1061,12 @@ export class HybridBrowserSession {
}
}
}
-
+
console.log(`No suitable input element found for readonly ref ${ref}`);
return { success: false, error: `Element ref=${ref} is readonly and no suitable input was found` };
}
}
-
+
return { success: false, error: 'No valid input provided' };
} catch (error) {
return { success: false, error: `Type failed: ${error}` };
@@ -1080,19 +1080,19 @@ export class HybridBrowserSession {
try {
// Ensure we have the latest snapshot
await (page as any)._snapshotForAI();
-
+
// Use Playwright's aria-ref selector
const selector = `aria-ref=${ref}`;
const element = await page.locator(selector).first();
-
+
const exists = await element.count() > 0;
if (!exists) {
return { success: false, error: `Element with ref ${ref} not found` };
}
-
+
// Select value using Playwright's built-in selectOption method
await element.selectOption(value);
-
+
return { success: true };
} catch (error) {
return { success: false, error: `Select failed: ${error}` };
@@ -1127,7 +1127,7 @@ export class HybridBrowserSession {
default:
return { success: false, error: `Invalid control action: ${control}` };
}
-
+
return { success: true };
} catch (error) {
return { success: false, error: `Mouse action failed: ${error}` };
@@ -1141,38 +1141,38 @@ export class HybridBrowserSession {
try {
// Ensure we have the latest snapshot
await (page as any)._snapshotForAI();
-
+
// Get elements using Playwright's aria-ref selector
const fromSelector = `aria-ref=${fromRef}`;
const toSelector = `aria-ref=${toRef}`;
-
+
const fromElement = await page.locator(fromSelector).first();
const toElement = await page.locator(toSelector).first();
-
+
// Check if elements exist
const fromExists = await fromElement.count() > 0;
const toExists = await toElement.count() > 0;
-
+
if (!fromExists) {
return { success: false, error: `Source element with ref ${fromRef} not found` };
}
-
+
if (!toExists) {
return { success: false, error: `Target element with ref ${toRef} not found` };
}
-
+
// Get the center coordinates of both elements
const fromBox = await fromElement.boundingBox();
const toBox = await toElement.boundingBox();
-
+
if (!fromBox) {
return { success: false, error: `Could not get bounding box for source element with ref ${fromRef}` };
}
-
+
if (!toBox) {
return { success: false, error: `Could not get bounding box for target element with ref ${toRef}` };
}
-
+
const fromX = fromBox.x + fromBox.width / 2;
const fromY = fromBox.y + fromBox.height / 2;
const toX = toBox.x + toBox.width / 2;
@@ -1194,54 +1194,54 @@ export class HybridBrowserSession {
async executeAction(action: BrowserAction): Promise {
const startTime = Date.now();
const page = await this.getCurrentPage();
-
+
let elementSearchTime = 0;
let actionExecutionTime = 0;
let stabilityWaitTime = 0;
-
+
try {
const elementSearchStart = Date.now();
-
+
// No need to pre-fetch snapshot - each action method handles this
-
+
let newTabId: string | undefined;
let customMessage: string | undefined;
let actionDetails: Record | undefined;
-
+
switch (action.type) {
case 'click': {
elementSearchTime = Date.now() - elementSearchStart;
const clickStart = Date.now();
-
+
// Use simplified click logic
const clickResult = await this.performClick(page, action.ref);
-
+
if (!clickResult.success) {
throw new Error(`Click failed: ${clickResult.error}`);
}
-
+
// Capture new tab ID if present
newTabId = clickResult.newTabId;
-
+
// Capture diff snapshot if present
if (clickResult.diffSnapshot) {
actionDetails = { diffSnapshot: clickResult.diffSnapshot };
}
-
+
actionExecutionTime = Date.now() - clickStart;
break;
}
-
+
case 'type': {
elementSearchTime = Date.now() - elementSearchStart;
const typeStart = Date.now();
const typeResult = await this.performType(page, action.ref, action.text, action.inputs);
-
+
if (!typeResult.success) {
throw new Error(`Type failed: ${typeResult.error}`);
}
-
+
// Set custom message and details if multiple inputs were used
if (typeResult.details) {
const successCount = Object.values(typeResult.details).filter((r: any) => r.success).length;
@@ -1249,7 +1249,7 @@ export class HybridBrowserSession {
customMessage = `Typed text into ${successCount}/${totalCount} elements`;
actionDetails = typeResult.details;
}
-
+
// Capture diff snapshot if present
if (typeResult.diffSnapshot) {
if (!actionDetails) {
@@ -1257,25 +1257,25 @@ export class HybridBrowserSession {
}
actionDetails.diffSnapshot = typeResult.diffSnapshot;
}
-
+
actionExecutionTime = Date.now() - typeStart;
break;
}
-
+
case 'select': {
elementSearchTime = Date.now() - elementSearchStart;
const selectStart = Date.now();
-
+
const selectResult = await this.performSelect(page, action.ref, action.value);
if (!selectResult.success) {
throw new Error(`Select failed: ${selectResult.error}`);
}
-
+
actionExecutionTime = Date.now() - selectStart;
break;
}
-
+
case 'scroll': {
elementSearchTime = Date.now() - elementSearchStart;
const scrollStart = Date.now();
@@ -1288,7 +1288,7 @@ export class HybridBrowserSession {
actionExecutionTime = Date.now() - scrollStart;
break;
}
-
+
case 'enter': {
elementSearchTime = Date.now() - elementSearchStart;
const enterStart = Date.now();
@@ -1331,7 +1331,7 @@ export class HybridBrowserSession {
actionExecutionTime = Date.now() - keyPressStart;
break;
}
-
+
default:
throw new Error(`Unknown action type: ${(action as any).type}`);
}
@@ -1340,9 +1340,9 @@ export class HybridBrowserSession {
const stabilityStart = Date.now();
const stabilityResult = await this.waitForPageStability(page);
stabilityWaitTime = Date.now() - stabilityStart;
-
+
const totalTime = Date.now() - startTime;
-
+
return {
success: true,
message: customMessage || `Action ${action.type} executed successfully`,
@@ -1379,28 +1379,28 @@ export class HybridBrowserSession {
const startTime = Date.now();
const stabilityThreshold = 100; // Consider stable if no changes for 100ms
let lastChangeTime = Date.now();
-
+
try {
// Monitor DOM changes
await page.evaluate(() => {
let changeCount = 0;
(window as any).__domStabilityCheck = { changeCount: 0, lastChange: Date.now() };
-
+
const observer = new MutationObserver(() => {
(window as any).__domStabilityCheck.changeCount++;
(window as any).__domStabilityCheck.lastChange = Date.now();
});
-
- observer.observe(document.body, {
- childList: true,
+
+ observer.observe(document.body, {
+ childList: true,
subtree: true,
attributes: true,
characterData: true
});
-
+
(window as any).__domStabilityObserver = observer;
});
-
+
// Wait until no changes for stabilityThreshold or timeout
await page.waitForFunction(
(threshold) => {
@@ -1424,31 +1424,31 @@ export class HybridBrowserSession {
private async waitForPageStability(page: Page): Promise<{ domContentLoadedTime: number; networkIdleTime: number }> {
let domContentLoadedTime = 0;
let networkIdleTime = 0;
-
+
try {
const domStart = Date.now();
const browserConfig = this.configLoader.getBrowserConfig();
await page.waitForLoadState(browserConfig.domContentLoadedState as any, { timeout: browserConfig.pageStabilityTimeout });
domContentLoadedTime = Date.now() - domStart;
-
+
const networkStart = Date.now();
await page.waitForLoadState(browserConfig.networkIdleState as any, { timeout: browserConfig.networkIdleTimeout });
networkIdleTime = Date.now() - networkStart;
} catch (error) {
// Continue even if stability wait fails
}
-
+
return { domContentLoadedTime, networkIdleTime };
}
async visitPage(url: string): Promise {
const startTime = Date.now();
-
+
try {
// Get current page to check if it's blank
let currentPage: Page;
let currentUrl: string;
-
+
try {
currentPage = await this.getCurrentPage();
currentUrl = currentPage.url();
@@ -1457,36 +1457,36 @@ export class HybridBrowserSession {
console.log('[visitPage] Failed to get current page:', error);
throw new Error(`No active page available: ${error?.message || error}`);
}
-
+
// Check if current page is blank or if this is the first navigation
const browserConfig = this.configLoader.getBrowserConfig();
-
+
// Use unified blank page detection
const isBlankPage = this.isBlankPageUrl(currentUrl) || currentUrl === browserConfig.defaultStartUrl;
-
+
const shouldUseCurrentTab = isBlankPage || !this.hasNavigatedBefore;
-
-
+
+
if (shouldUseCurrentTab) {
// Navigate in current tab if it's blank
-
+
const navigationStart = Date.now();
const browserConfig = this.configLoader.getBrowserConfig();
- await currentPage.goto(url, {
+ await currentPage.goto(url, {
timeout: browserConfig.navigationTimeout,
waitUntil: browserConfig.domContentLoadedState as any
});
-
+
// Reset scroll position after navigation
this.scrollPosition = { x: 0, y: 0 };
-
+
// Mark that we've navigated
this.hasNavigatedBefore = true;
-
+
const navigationTime = Date.now() - navigationStart;
const stabilityResult = await this.waitForPageStability(currentPage);
const totalTime = Date.now() - startTime;
-
+
return {
success: true,
message: `Navigated to ${url}`,
@@ -1502,13 +1502,13 @@ export class HybridBrowserSession {
if (!this.context) {
throw new Error('Browser context not initialized');
}
-
+
const navigationStart = Date.now();
-
+
// In CDP mode, find an available blank tab instead of creating new page
let newPage: Page | null = null;
let newTabId: string | null = null;
-
+
const browserConfig = this.configLoader.getBrowserConfig();
if (browserConfig.cdpUrl) {
// CDP mode: find an available blank tab
@@ -1524,7 +1524,7 @@ export class HybridBrowserSession {
break;
}
}
-
+
if (!newPage || !newTabId) {
console.log('[CDP] No available blank tabs, creating new page');
newPage = await this.context.newPage();
@@ -1537,31 +1537,31 @@ export class HybridBrowserSession {
newTabId = this.generateTabId();
this.registerNewPage(newTabId, newPage);
}
-
+
// Set up page properties
newPage.setDefaultNavigationTimeout(browserConfig.navigationTimeout);
newPage.setDefaultTimeout(browserConfig.navigationTimeout);
-
+
// Navigate to the URL
- await newPage.goto(url, {
+ await newPage.goto(url, {
timeout: browserConfig.navigationTimeout,
waitUntil: browserConfig.domContentLoadedState as any
});
-
+
// Automatically switch to the new tab
this.currentTabId = newTabId;
await newPage.bringToFront();
-
+
// Reset scroll position for the new page
this.scrollPosition = { x: 0, y: 0 };
-
+
// Mark that we've navigated
this.hasNavigatedBefore = true;
-
+
const navigationTime = Date.now() - navigationStart;
const stabilityResult = await this.waitForPageStability(newPage);
const totalTime = Date.now() - startTime;
-
+
return {
success: true,
message: `Opened ${url} in new tab`,
@@ -1593,20 +1593,20 @@ export class HybridBrowserSession {
if (!this.pages.has(tabId)) {
return false;
}
-
+
const page = this.pages.get(tabId)!;
-
+
if (page.isClosed()) {
this.pages.delete(tabId);
return false;
}
-
+
try {
console.log(`Switching to tab ${tabId}`);
-
+
// Update internal state first
this.currentTabId = tabId;
-
+
// Try to activate the tab using a gentler approach
// Instead of bringToFront, we'll use a combination of methods
try {
@@ -1617,7 +1617,7 @@ export class HybridBrowserSession {
// Dispatch a focus event
window.dispatchEvent(new Event('focus'));
}).catch(() => {});
-
+
// Method 2: For non-headless mode, schedule bringToFront asynchronously
// This prevents WebSocket disruption by not blocking the current operation
if (!this.configLoader.getBrowserConfig().headless) {
@@ -1638,7 +1638,7 @@ export class HybridBrowserSession {
// Log but don't fail - internal state is still updated
console.warn(`Tab focus warning for ${tabId}:`, error);
}
-
+
console.log(`Successfully switched to tab ${tabId}`);
return true;
} catch (error) {
@@ -1651,15 +1651,15 @@ export class HybridBrowserSession {
if (!this.pages.has(tabId)) {
return false;
}
-
+
const page = this.pages.get(tabId)!;
-
+
if (!page.isClosed()) {
await page.close();
}
-
+
this.pages.delete(tabId);
-
+
if (tabId === this.currentTabId) {
const remainingTabs = Array.from(this.pages.keys());
if (remainingTabs.length > 0) {
@@ -1668,7 +1668,7 @@ export class HybridBrowserSession {
this.currentTabId = null;
}
}
-
+
return true;
}
@@ -1794,15 +1794,15 @@ export class HybridBrowserSession {
async takeScreenshot(): Promise<{ buffer: Buffer; timing: { screenshot_time_ms: number } }> {
const startTime = Date.now();
const page = await this.getCurrentPage();
-
+
const browserConfig = this.configLoader.getBrowserConfig();
- const buffer = await page.screenshot({
+ const buffer = await page.screenshot({
timeout: browserConfig.screenshotTimeout,
fullPage: browserConfig.fullPageScreenshot
});
-
+
const screenshotTime = Date.now() - startTime;
-
+
return {
buffer,
timing: {
@@ -1813,16 +1813,16 @@ export class HybridBrowserSession {
async close(): Promise {
const browserConfig = this.configLoader.getBrowserConfig();
-
+
for (const page of this.pages.values()) {
if (!page.isClosed()) {
await page.close();
}
}
-
+
this.pages.clear();
this.currentTabId = null;
-
+
// Handle context cleanup separately for CDP mode
if (!browserConfig.cdpUrl && this.context && this.contextOwnedByUs) {
// For non-CDP mode, close context here
@@ -1830,7 +1830,7 @@ export class HybridBrowserSession {
this.context = null;
this.contextOwnedByUs = false;
}
-
+
if (this.browser) {
if (browserConfig.cdpUrl) {
// In CDP mode: tear down only our context, then disconnect
@@ -1847,4 +1847,4 @@ export class HybridBrowserSession {
this.browser = null;
}
}
-}
\ No newline at end of file
+}
diff --git a/camel/toolkits/hybrid_browser_toolkit/ts/src/config-loader.ts b/camel/toolkits/hybrid_browser_toolkit/ts/src/config-loader.ts
index 703c151c5f..e338881a39 100644
--- a/camel/toolkits/hybrid_browser_toolkit/ts/src/config-loader.ts
+++ b/camel/toolkits/hybrid_browser_toolkit/ts/src/config-loader.ts
@@ -10,10 +10,10 @@ export interface BrowserConfig {
headless: boolean;
userDataDir?: string;
stealth: StealthConfig;
-
+
// Default settings
defaultStartUrl: string;
-
+
// Timeout configurations (in milliseconds)
defaultTimeout?: number;
shortTimeout?: number;
@@ -22,54 +22,54 @@ export interface BrowserConfig {
screenshotTimeout: number;
pageStabilityTimeout: number;
domContentLoadedTimeout: number;
-
+
// Action timeouts
popupTimeout: number;
clickTimeout: number;
-
+
// Tab management
tabIdPrefix: string;
tabCounterPadding: number;
consoleLogLimit: number;
-
+
// Scroll and positioning
scrollPositionScale: number;
navigationDelay: number;
-
+
// Page states and URLs
blankPageUrls: string[];
dataUrlPrefix: string;
-
+
// Wait states
domContentLoadedState: string;
networkIdleState: string;
-
+
// HTML attributes
targetAttribute: string;
hrefAttribute: string;
onclickAttribute: string;
-
+
// Target and navigation values
blankTarget: string;
windowOpenString: string;
javascriptVoidPrefix: string;
javascriptVoidEmpty: string;
anchorOnly: string;
-
+
// Action options
forceClick: boolean;
fullPageScreenshot: boolean;
-
+
// Keyboard keys
enterKey: string;
-
+
// Other options
useNativePlaywrightMapping: boolean;
viewport: {
width: number;
height: number;
};
-
+
// CDP connection options
connectOverCdp: boolean;
cdpUrl?: string;
@@ -169,7 +169,7 @@ export class ConfigLoader {
...(browserConfig.stealth || {})
}
};
-
+
this.wsConfig = {
...getDefaultWebSocketConfig(),
...wsConfig
@@ -198,7 +198,7 @@ export class ConfigLoader {
if (config.stealth !== undefined) {
// Handle both boolean and object formats for backward compatibility
if (typeof config.stealth === 'boolean') {
- browserConfig.stealth = {
+ browserConfig.stealth = {
enabled: config.stealth,
args: getDefaultStealthConfig().args
};
@@ -211,12 +211,12 @@ export class ConfigLoader {
if (config.networkIdleTimeout !== undefined) browserConfig.networkIdleTimeout = config.networkIdleTimeout;
if (config.screenshotTimeout !== undefined) browserConfig.screenshotTimeout = config.screenshotTimeout;
if (config.pageStabilityTimeout !== undefined) browserConfig.pageStabilityTimeout = config.pageStabilityTimeout;
-
+
if (config.browser_log_to_file !== undefined) wsConfig.browser_log_to_file = config.browser_log_to_file;
if (config.session_id !== undefined) wsConfig.session_id = config.session_id;
if (config.viewport_limit !== undefined) wsConfig.viewport_limit = config.viewport_limit;
if (config.fullVisualMode !== undefined) wsConfig.fullVisualMode = config.fullVisualMode;
-
+
// CDP connection options
if (config.connectOverCdp !== undefined) browserConfig.connectOverCdp = config.connectOverCdp;
if (config.cdpUrl !== undefined) browserConfig.cdpUrl = config.cdpUrl;
@@ -230,4 +230,4 @@ export class ConfigLoader {
return { ...this.browserConfig.stealth };
}
-}
\ No newline at end of file
+}
diff --git a/camel/toolkits/hybrid_browser_toolkit/ts/src/hybrid-browser-toolkit.ts b/camel/toolkits/hybrid_browser_toolkit/ts/src/hybrid-browser-toolkit.ts
index 996986b447..23bbec68df 100644
--- a/camel/toolkits/hybrid_browser_toolkit/ts/src/hybrid-browser-toolkit.ts
+++ b/camel/toolkits/hybrid_browser_toolkit/ts/src/hybrid-browser-toolkit.ts
@@ -22,10 +22,10 @@ export class HybridBrowserToolkit {
async openBrowser(startUrl?: string): Promise {
const startTime = Date.now();
-
+
try {
await this.session.ensureBrowser();
-
+
// Check if we should skip navigation in CDP keep-current-page mode
const browserConfig = this.configLoader.getBrowserConfig();
if (browserConfig.cdpUrl && browserConfig.cdpKeepCurrentPage && !startUrl) {
@@ -33,12 +33,12 @@ export class HybridBrowserToolkit {
const snapshotStart = Date.now();
const snapshot = await this.getSnapshotForAction(this.viewportLimit);
const snapshotTime = Date.now() - snapshotStart;
-
+
const page = await this.session.getCurrentPage();
const currentUrl = page ? await page.url() : 'unknown';
-
+
const totalTime = Date.now() - startTime;
-
+
return {
success: true,
message: `Browser opened in CDP keep-current-page mode (current page: ${currentUrl})`,
@@ -49,18 +49,18 @@ export class HybridBrowserToolkit {
},
};
}
-
+
// For normal mode or CDP with cdpKeepCurrentPage=false: navigate to URL
if (!browserConfig.cdpUrl || !browserConfig.cdpKeepCurrentPage) {
const url = startUrl || this.config.defaultStartUrl || 'https://google.com/';
const result = await this.session.visitPage(url);
-
+
const snapshotStart = Date.now();
const snapshot = await this.getSnapshotForAction(this.viewportLimit);
const snapshotTime = Date.now() - snapshotStart;
-
+
const totalTime = Date.now() - startTime;
-
+
return {
success: true,
message: result.message,
@@ -72,14 +72,14 @@ export class HybridBrowserToolkit {
},
};
}
-
+
// Fallback: Just return current page snapshot without any navigation
const snapshotStart = Date.now();
const snapshot = await this.getSnapshotForAction(this.viewportLimit);
const snapshotTime = Date.now() - snapshotStart;
-
+
const totalTime = Date.now() - startTime;
-
+
return {
success: true,
message: `Browser opened without navigation`,
@@ -120,35 +120,35 @@ export class HybridBrowserToolkit {
try {
// Ensure browser is initialized before visiting page
await this.session.ensureBrowser();
-
+
const result = await this.session.visitPage(url);
-
+
// Format response for Python layer compatibility
const response: any = {
result: result.message,
snapshot: '',
};
-
+
if (result.success) {
const snapshotStart = Date.now();
response.snapshot = await this.getSnapshotForAction(this.viewportLimit);
const snapshotTime = Date.now() - snapshotStart;
-
+
if (result.timing) {
result.timing.snapshot_time_ms = snapshotTime;
}
}
-
+
// Include timing if available
if (result.timing) {
response.timing = result.timing;
}
-
+
// Include newTabId if present
if (result.newTabId) {
response.newTabId = result.newTabId;
}
-
+
return response;
} catch (error) {
console.error('[visitPage] Error:', error);
@@ -175,7 +175,7 @@ export class HybridBrowserToolkit {
return `Error capturing snapshot: ${error}`;
}
}
-
+
// Internal method for getting snapshot in actions (respects fullVisualMode)
private async getSnapshotForAction(viewportLimit: boolean = false): Promise {
if (this.fullVisualMode) {
@@ -192,16 +192,16 @@ export class HybridBrowserToolkit {
async getSomScreenshot(): Promise {
const startTime = Date.now();
console.log('[HybridBrowserToolkit] Starting getSomScreenshot...');
-
+
try {
// Get page and snapshot data
const page = await this.session.getCurrentPage();
const snapshotResult = await this.session.getSnapshotForAI(true); // Include coordinates
-
+
// Parse clickable elements from snapshot text
const clickableElements = this.parseClickableElements(snapshotResult.snapshot);
console.log(`[HybridBrowserToolkit] Found ${clickableElements.size} clickable elements`);
-
+
// Apply hierarchy-based filtering
const filteredElements = filterClickableByHierarchy(snapshotResult.snapshot, clickableElements);
console.log(`[HybridBrowserToolkit] After filtering: ${filteredElements.size} elements remain`);
@@ -213,11 +213,11 @@ export class HybridBrowserToolkit {
filteredElements,
undefined // No export path - don't generate files
);
-
+
// Add snapshot timing info to result
result.timing.snapshot_time_ms = snapshotResult.timing.snapshot_time_ms;
result.timing.coordinate_enrichment_time_ms = snapshotResult.timing.coordinate_enrichment_time_ms;
-
+
return result;
} catch (error) {
const totalTime = Date.now() - startTime;
@@ -242,7 +242,7 @@ export class HybridBrowserToolkit {
private parseClickableElements(snapshotText: string): Set<string> {
const clickableElements = new Set<string>();
const lines = snapshotText.split('\n');
-
+
for (const line of lines) {
// Look for lines containing [cursor=pointer] or [active] and extract ref
if (line.includes('[cursor=pointer]') || line.includes('[active]')) {
@@ -252,23 +252,23 @@ export class HybridBrowserToolkit {
}
}
}
-
+
return clickableElements;
}
private async executeActionWithSnapshot(action: BrowserAction): Promise {
const result = await this.session.executeAction(action);
-
+
const response: any = {
result: result.message,
snapshot: '',
};
-
+
if (result.success) {
if (result.details?.diffSnapshot) {
response.snapshot = result.details.diffSnapshot;
-
+
if (result.timing) {
result.timing.snapshot_time_ms = 0; // Diff snapshot time is included in action time
}
@@ -277,23 +277,23 @@ export class HybridBrowserToolkit {
const snapshotStart = Date.now();
response.snapshot = await this.getPageSnapshot(this.viewportLimit);
const snapshotTime = Date.now() - snapshotStart;
-
+
if (result.timing) {
result.timing.snapshot_time_ms = snapshotTime;
}
}
}
-
+
// Include timing if available
if (result.timing) {
response.timing = result.timing;
}
-
+
// Include newTabId if present
if (result.newTabId) {
response.newTabId = result.newTabId;
}
-
+
// Include details if present (excluding diffSnapshot as it's already in snapshot)
if (result.details) {
const { diffSnapshot, ...otherDetails } = result.details;
@@ -301,7 +301,7 @@ export class HybridBrowserToolkit {
response.details = otherDetails;
}
}
-
+
return response;
}
@@ -312,7 +312,7 @@ export class HybridBrowserToolkit {
async type(refOrInputs: string | Array<{ ref: string; text: string }>, text?: string): Promise {
let action: BrowserAction;
-
+
if (typeof refOrInputs === 'string') {
// Single input mode (backward compatibility)
if (text === undefined) {
@@ -323,7 +323,7 @@ export class HybridBrowserToolkit {
// Multiple inputs mode
action = { type: 'type', inputs: refOrInputs };
}
-
+
return this.executeActionWithSnapshot(action);
}
@@ -363,20 +363,20 @@ export class HybridBrowserToolkit {
async back(): Promise {
const startTime = Date.now();
-
+
try {
const page = await this.session.getCurrentPage();
-
+
const navigationStart = Date.now();
await page.goBack({ waitUntil: 'domcontentloaded' });
const navigationTime = Date.now() - navigationStart;
-
+
const snapshotStart = Date.now();
const snapshot = await this.getSnapshotForAction(this.viewportLimit);
const snapshotTime = Date.now() - snapshotStart;
-
+
const totalTime = Date.now() - startTime;
-
+
return {
success: true,
message: 'Navigated back successfully',
@@ -403,20 +403,20 @@ export class HybridBrowserToolkit {
async forward(): Promise {
const startTime = Date.now();
-
+
try {
const page = await this.session.getCurrentPage();
-
+
const navigationStart = Date.now();
await page.goForward({ waitUntil: 'domcontentloaded' });
const navigationTime = Date.now() - navigationStart;
-
+
const snapshotStart = Date.now();
const snapshot = await this.getSnapshotForAction(this.viewportLimit);
const snapshotTime = Date.now() - snapshotStart;
-
+
const totalTime = Date.now() - startTime;
-
+
return {
success: true,
message: 'Navigated forward successfully',
@@ -444,17 +444,17 @@ export class HybridBrowserToolkit {
async switchTab(tabId: string): Promise {
const startTime = Date.now();
-
+
try {
const success = await this.session.switchToTab(tabId);
-
+
if (success) {
const snapshotStart = Date.now();
const snapshot = await this.getPageSnapshot(this.viewportLimit);
const snapshotTime = Date.now() - snapshotStart;
-
+
const totalTime = Date.now() - startTime;
-
+
return {
result: `Switched to tab ${tabId}`,
snapshot: snapshot,
@@ -479,7 +479,7 @@ export class HybridBrowserToolkit {
async closeTab(tabId: string): Promise {
const success = await this.session.closeTab(tabId);
-
+
if (success) {
return {
success: true,
@@ -511,7 +511,7 @@ export class HybridBrowserToolkit {
const startTime = Date.now();
try {
const page = await this.session.getCurrentPage();
-
+
// Wrap the code to capture console.log output
const wrappedCode = `
(function() {
@@ -527,7 +527,7 @@ export class HybridBrowserToolkit {
}).join(' '));
originalLog.apply(console, args);
};
-
+
let result;
try {
result = eval(${JSON.stringify(code)});
@@ -539,12 +539,12 @@ export class HybridBrowserToolkit {
throw error;
}
}
-
+
console.log = originalLog;
return { result, logs: _logs };
})()
`;
-
+
const evalResult = await page.evaluate(wrappedCode) as { result: any; logs: string[] };
const { result, logs } = evalResult;
@@ -571,7 +571,7 @@ export class HybridBrowserToolkit {
snapshot_time_ms: snapshotTime,
},
};
-
+
} catch (error) {
const totalTime = Date.now() - startTime;
return {
@@ -587,4 +587,3 @@ export class HybridBrowserToolkit {
}
}
-
diff --git a/camel/toolkits/hybrid_browser_toolkit/ts/src/index.ts b/camel/toolkits/hybrid_browser_toolkit/ts/src/index.ts
index 70e5ff2fb8..93bc34265f 100644
--- a/camel/toolkits/hybrid_browser_toolkit/ts/src/index.ts
+++ b/camel/toolkits/hybrid_browser_toolkit/ts/src/index.ts
@@ -4,4 +4,4 @@ export { ConfigLoader, StealthConfig, BrowserConfig, WebSocketConfig } from './c
export * from './types';
// Default export for convenience
-export { HybridBrowserToolkit as default } from './hybrid-browser-toolkit';
\ No newline at end of file
+export { HybridBrowserToolkit as default } from './hybrid-browser-toolkit';
diff --git a/camel/toolkits/hybrid_browser_toolkit/ts/src/parent-child-filter.ts b/camel/toolkits/hybrid_browser_toolkit/ts/src/parent-child-filter.ts
index 7767826351..88fdf2270a 100644
--- a/camel/toolkits/hybrid_browser_toolkit/ts/src/parent-child-filter.ts
+++ b/camel/toolkits/hybrid_browser_toolkit/ts/src/parent-child-filter.ts
@@ -38,11 +38,11 @@ function isPropagatingElement(element: ElementInfo): boolean {
const tagName = element.tagName || element.type || '';
const tag = tagName.toLowerCase();
const role = element.role || element.attributes?.role || null;
-
+
// For generic elements with cursor=pointer, we need to be more selective
// Only treat them as propagating if they don't have text content
// (text-containing generics are usually labels, not containers)
- if ((tag === 'generic' || element.type === 'generic') &&
+ if ((tag === 'generic' || element.type === 'generic') &&
element.attributes?.['cursor'] === 'pointer') {
// If element has direct text content, it's likely a label, not a container
if (element.text && element.text.trim()) {
@@ -51,7 +51,7 @@ function isPropagatingElement(element: ElementInfo): boolean {
// If no text, it might be a container
return true;
}
-
+
for (const pattern of PROPAGATING_ELEMENTS) {
if (pattern.tag === tag) {
if (pattern.role === null || pattern.role === role) {
@@ -71,20 +71,20 @@ function isContained(
threshold: number
): boolean {
// Calculate intersection
- const xOverlap = Math.max(0,
- Math.min(childBounds.x + childBounds.width, parentBounds.x + parentBounds.width) -
+ const xOverlap = Math.max(0,
+ Math.min(childBounds.x + childBounds.width, parentBounds.x + parentBounds.width) -
Math.max(childBounds.x, parentBounds.x)
);
const yOverlap = Math.max(0,
- Math.min(childBounds.y + childBounds.height, parentBounds.y + parentBounds.height) -
+ Math.min(childBounds.y + childBounds.height, parentBounds.y + parentBounds.height) -
Math.max(childBounds.y, parentBounds.y)
);
-
+
const intersectionArea = xOverlap * yOverlap;
const childArea = childBounds.width * childBounds.height;
-
+
if (childArea === 0) return false;
-
+
return (intersectionArea / childArea) >= threshold;
}
@@ -96,47 +96,47 @@ function shouldFilterChild(childEl: ElementInfo, parentEl: ElementInfo): boolean
if (!isPropagatingElement(parentEl)) {
return false;
}
-
+
// Never filter if elements don't have coordinates
if (!childEl.coordinates || !parentEl.coordinates) {
return false;
}
-
+
// Check containment
if (!isContained(childEl.coordinates, parentEl.coordinates, CONTAINMENT_THRESHOLD)) {
return false;
}
-
+
const childTag = (childEl.tagName || childEl.type || '').toLowerCase();
const childRole = childEl.role || childEl.attributes?.role || null;
-
+
// Exception rules - never filter these:
-
+
// 1. Form elements (need individual interaction)
if (['input', 'select', 'textarea', 'label'].includes(childTag)) {
return false;
}
-
+
// 2. Child is also a propagating element (might have stopPropagation)
if (isPropagatingElement(childEl)) {
return false;
}
-
+
// 3. Has onclick handler
if (childEl.attributes?.onclick) {
return false;
}
-
+
// 4. Has meaningful aria-label
if (childEl.attributes?.['aria-label']?.trim()) {
return false;
}
-
+
// 5. Has interactive role
if (['button', 'link', 'checkbox', 'radio', 'tab', 'menuitem'].includes(childRole || '')) {
return false;
}
-
+
// Default: filter this child
return true;
}
@@ -157,50 +157,50 @@ export function filterParentChildElements(
const elementRefs = Array.from(clickableRefs);
const filteredElements = new Set(elementRefs);
const debugInfo: any[] = [];
-
+
console.log(`[Parent-Child Filter] Analyzing ${elementRefs.length} clickable elements`);
-
+
// Check each pair of elements for parent-child filtering
for (let i = 0; i < elementRefs.length; i++) {
const parentRef = elementRefs[i];
const parentEl = elements[parentRef];
-
+
if (!parentEl?.coordinates) continue;
-
+
const isParentPropagating = isPropagatingElement(parentEl);
-
+
for (let j = 0; j < elementRefs.length; j++) {
if (i === j) continue;
-
+
const childRef = elementRefs[j];
const childEl = elements[childRef];
-
+
if (!childEl?.coordinates) continue;
-
+
// Debug parent-child relationships when enabled
const DEBUG_PARENT_CHILD = process.env.DEBUG_PARENT_CHILD === 'true';
if (DEBUG_PARENT_CHILD) {
const shouldFilter = shouldFilterChild(childEl, parentEl);
console.log(`\n[Debug] Checking ${parentRef} -> ${childRef}:`);
console.log(`Parent:`, {
- ref: parentRef,
- type: parentEl.type || parentEl.tagName,
+ ref: parentRef,
+ type: parentEl.type || parentEl.tagName,
role: parentEl.role,
coords: parentEl.coordinates,
isPropagating: isParentPropagating
});
console.log(`Child:`, {
- ref: childRef,
- type: childEl.type || childEl.tagName,
+ ref: childRef,
+ type: childEl.type || childEl.tagName,
role: childEl.role,
coords: childEl.coordinates
});
console.log(`Should filter? ${shouldFilter}`);
}
-
+
if (shouldFilterChild(childEl, parentEl)) {
filteredElements.delete(childRef);
-
+
debugInfo.push({
type: 'filtered',
childRef,
@@ -215,12 +215,12 @@ export function filterParentChildElements(
}
}
}
-
+
const filteredCount = elementRefs.length - filteredElements.size;
console.log(`[Parent-Child Filter] Filtered out ${filteredCount} child elements`);
-
+
return {
filteredElements,
debugInfo
};
-}
\ No newline at end of file
+}
diff --git a/camel/toolkits/hybrid_browser_toolkit/ts/src/snapshot-parser.ts b/camel/toolkits/hybrid_browser_toolkit/ts/src/snapshot-parser.ts
index f18910a24a..a7e770b544 100644
--- a/camel/toolkits/hybrid_browser_toolkit/ts/src/snapshot-parser.ts
+++ b/camel/toolkits/hybrid_browser_toolkit/ts/src/snapshot-parser.ts
@@ -14,15 +14,15 @@ export interface SnapshotNode {
export function parseSnapshotHierarchy(snapshotText: string): Map<string, SnapshotNode> {
const nodes = new Map<string, SnapshotNode>();
const lines = snapshotText.split('\n');
-
+
// Stack to track parent elements at each indentation level
const parentStack: { ref: string; indent: number }[] = [];
-
+
for (const line of lines) {
if (!line.trim()) continue;
-
+
const indent = line.length - line.trimStart().length;
-
+
// Extract type and optional label
const headerMatch = line.match(/^\s*(?:-\s*)?'?([a-z0-9_-]+)(?:\s+"((?:[^"\\]|\\.)*)")?/i);
if (!headerMatch) continue;
@@ -45,11 +45,11 @@ export function parseSnapshotHierarchy(snapshotText: string): Map<string, SnapshotNode> {
while (parentStack.length > 0 && parentStack[parentStack.length - 1].indent >= indent) {
parentStack.pop();
}
-
+
const node: SnapshotNode = {
ref,
type: type.toLowerCase(),
@@ -58,16 +58,16 @@ export function parseSnapshotHierarchy(snapshotText: string): Map<string, SnapshotNode> {
parent: parentStack.length > 0 ? parentStack[parentStack.length - 1].ref : undefined
};
-
+
if (node.parent && nodes.has(node.parent)) {
nodes.get(node.parent)!.children.push(ref);
}
-
+
nodes.set(ref, node);
-
+
parentStack.push({ ref, indent });
}
-
+
return nodes;
}
@@ -81,7 +81,7 @@ export function filterClickableByHierarchy(
const hierarchy = parseSnapshotHierarchy(snapshotText);
const filtered = new Set(clickableElements);
const debugInfo: any[] = [];
-
+
// Debug clickable elements when enabled
const DEBUG_SNAPSHOT_PARSER = process.env.DEBUG_SNAPSHOT_PARSER === 'true';
if (DEBUG_SNAPSHOT_PARSER) {
@@ -94,14 +94,14 @@ export function filterClickableByHierarchy(
}
});
}
-
+
// First pass: identify parent-child relationships where both are clickable
const parentChildPairs: Array<{parent: string, child: string, parentType: string, childType: string}> = [];
-
+
for (const childRef of clickableElements) {
const childNode = hierarchy.get(childRef);
if (!childNode || !childNode.parent) continue;
-
+
const parentRef = childNode.parent;
if (clickableElements.has(parentRef)) {
const parentNode = hierarchy.get(parentRef);
@@ -112,7 +112,7 @@ export function filterClickableByHierarchy(
parentType: parentNode.type.toLowerCase(),
childType: childNode.type.toLowerCase()
});
-
+
// Debug specific pairs
if ((parentRef === 'e296' && childRef === 'e297') ||
(parentRef === 'e361' && childRef === 'e363') ||
@@ -124,19 +124,19 @@ export function filterClickableByHierarchy(
}
}
}
-
+
// Decide which elements to filter based on parent-child relationships
for (const pair of parentChildPairs) {
const { parent, child, parentType, childType } = pair;
-
+
// Rules for what to filter:
// 1. link > img: filter img (keep link)
- // 2. button > generic: filter generic (keep button)
+ // 2. button > generic: filter generic (keep button)
// 3. generic > button: filter generic (keep button)
// 4. link > generic: filter generic (keep link)
// 5. generic > generic: filter child (keep parent)
// 6. generic > unknown: filter child (keep parent)
-
+
if ((parentType === 'link' && childType === 'img') ||
(parentType === 'button' && childType === 'generic') ||
(parentType === 'link' && childType === 'generic') ||
@@ -151,14 +151,14 @@ export function filterClickableByHierarchy(
console.log(`[Hierarchy Filter] Filtered ${parent} (${parentType}) - keeping child ${child} (${childType})`);
}
}
-
+
// Original logic for nested hierarchies (keep for deep nesting)
for (const childRef of clickableElements) {
if (!filtered.has(childRef)) continue; // Already filtered
-
+
const childNode = hierarchy.get(childRef);
if (!childNode || !childNode.parent) continue;
-
+
// Check if any ancestor is a propagating element
let currentParent: string | undefined = childNode.parent;
while (currentParent) {
@@ -169,21 +169,21 @@ export function filterClickableByHierarchy(
const parentType = parentNode.type.toLowerCase();
const isPropagating = ['button', 'link', 'a'].includes(parentType) ||
(parentType === 'generic' && parentNode.attributes.cursor === 'pointer' && !parentNode.text);
-
+
if (isPropagating) {
// Filter child elements that should be contained within propagating parents
const childType = childNode.type.toLowerCase();
-
+
// Filter these types of children:
// 1. Generic elements with cursor=pointer
// 2. Images within links/buttons
// 3. Text elements (span, generic without specific role)
- const shouldFilter =
+ const shouldFilter =
(childType === 'generic' && childNode.attributes.cursor === 'pointer') ||
childType === 'img' ||
childType === 'span' ||
(childType === 'generic' && !childNode.attributes.role);
-
+
if (shouldFilter) {
filtered.delete(childRef);
console.log(`[Hierarchy Filter] Filtered ${childRef} (${childType}) contained in ${currentParent} (${parentType})`);
@@ -197,23 +197,23 @@ export function filterClickableByHierarchy(
currentParent = nextParent?.parent;
}
}
-
+
// Additional pass: if a generic parent contains only one button child, filter the parent
for (const ref of Array.from(filtered)) {
const node = hierarchy.get(ref);
if (!node || node.type.toLowerCase() !== 'generic') continue;
-
+
// Check if this generic has exactly one clickable child that's a button
- const clickableChildren = node.children.filter(childRef =>
+ const clickableChildren = node.children.filter(childRef =>
filtered.has(childRef) && hierarchy.get(childRef)?.type.toLowerCase() === 'button'
);
-
+
if (clickableChildren.length === 1) {
// This generic wraps a single button - filter it out
filtered.delete(ref);
console.log(`[Hierarchy Filter] Filtered ${ref} (generic wrapper around button ${clickableChildren[0]})`);
}
}
-
+
return filtered;
-}
\ No newline at end of file
+}
diff --git a/camel/toolkits/hybrid_browser_toolkit/ts/src/som-screenshot-injected.ts b/camel/toolkits/hybrid_browser_toolkit/ts/src/som-screenshot-injected.ts
index eaf3187416..a649a4e6a8 100644
--- a/camel/toolkits/hybrid_browser_toolkit/ts/src/som-screenshot-injected.ts
+++ b/camel/toolkits/hybrid_browser_toolkit/ts/src/som-screenshot-injected.ts
@@ -19,29 +19,29 @@ export class SomScreenshotInjected {
exportPath?: string
): Promise {
const startTime = Date.now();
-
+
try {
// Use the already filtered clickableElements directly
const filterStartTime = Date.now();
const filterTime = Date.now() - filterStartTime;
console.log(`Using pre-filtered clickable elements: ${clickableElements.size} elements`);
-
+
// Prepare element geometry data for export
const elementGeometry: any[] = [];
// Inject and capture in one go
// Collect visibility debug info
const visibilityDebugInfo: any[] = [];
-
+
const result = await page.evaluate(async (data) => {
const { elements, clickable, filterDebugInfo } = data;
const markedElements: any[] = [];
-
+
// Debug info collector - include filter debug info
const debugInfo: any[] = [...filterDebugInfo];
-
+
// Helper function to check element visibility based on coordinates
function checkElementVisibilityByCoords(
- coords: { x: number, y: number, width: number, height: number },
+ coords: { x: number, y: number, width: number, height: number },
ref: string,
elementInfo: any
): 'visible' | 'partial' | 'hidden' {
@@ -50,71 +50,71 @@ export class SomScreenshotInjected {
coords.x + coords.width < 0 || coords.x > window.innerWidth) {
return 'hidden';
}
-
+
// Simple approach: just check the center point
// If center is visible and shows our element (or its child), consider it visible
const centerX = coords.x + coords.width * 0.5;
const centerY = coords.y + coords.height * 0.5;
-
+
try {
const elementsAtCenter = document.elementsFromPoint(centerX, centerY);
if (!elementsAtCenter || elementsAtCenter.length === 0) {
return 'hidden';
}
-
+
// Find our target element in the stack
let targetFound = false;
let targetIsTopmost = false;
-
+
for (let i = 0; i < elementsAtCenter.length; i++) {
const elem = elementsAtCenter[i];
const rect = elem.getBoundingClientRect();
-
+
// Check if this element matches our expected bounds (within tolerance)
- if (Math.abs(rect.left - coords.x) < 5 &&
+ if (Math.abs(rect.left - coords.x) < 5 &&
Math.abs(rect.top - coords.y) < 5 &&
Math.abs(rect.width - coords.width) < 10 &&
Math.abs(rect.height - coords.height) < 10) {
targetFound = true;
targetIsTopmost = (i === 0);
-
+
// If target is topmost, it's definitely visible
if (targetIsTopmost) {
return 'visible';
}
-
+
// If not topmost, check if the topmost element is a child of our target
const topmostElem = elementsAtCenter[0];
if (elem.contains(topmostElem)) {
// Topmost is our child - element is visible
return 'visible';
}
-
+
// Otherwise, we're obscured
return 'hidden';
}
}
-
+
// If we didn't find our target element at all
if (!targetFound) {
// Special handling for composite widgets
const topElement = elementsAtCenter[0];
const tagName = topElement.tagName.toUpperCase();
-
+
// Get element role/type for better decision making
const elementRole = elementInfo?.role || '';
const elementTagName = elementInfo?.tagName || '';
-
+
// Only apply special handling for form controls that are part of composite widgets
if (['SELECT', 'INPUT', 'TEXTAREA', 'BUTTON'].includes(tagName)) {
const isFormRelatedElement = ['combobox', 'select', 'textbox', 'searchbox', 'spinbutton'].includes(elementRole) ||
['SELECT', 'INPUT', 'TEXTAREA', 'BUTTON', 'OPTION'].includes(elementTagName.toUpperCase());
-
+
// Check if the form control approximately matches our area
const rect = topElement.getBoundingClientRect();
const overlap = Math.min(rect.right, coords.x + coords.width) - Math.max(rect.left, coords.x) > 0 &&
Math.min(rect.bottom, coords.y + coords.height) - Math.max(rect.top, coords.y) > 0;
-
+
if (overlap && isFormRelatedElement) {
// Check for specific composite widget patterns
// For combobox with search input (like Amazon search)
@@ -122,11 +122,11 @@ export class SomScreenshotInjected {
// This is likely a search box with category selector - mark as visible
return 'visible';
}
-
+
// For button/generic elements covered by INPUT with exact same bounds
// This usually indicates the INPUT is the actual interactive element for the button
if ((elementRole === 'button' || elementRole === 'generic') && tagName === 'INPUT') {
- const rectMatch = Math.abs(rect.left - coords.x) < 2 &&
+ const rectMatch = Math.abs(rect.left - coords.x) < 2 &&
Math.abs(rect.top - coords.y) < 2 &&
Math.abs(rect.width - coords.width) < 2 &&
Math.abs(rect.height - coords.height) < 2;
@@ -135,7 +135,7 @@ export class SomScreenshotInjected {
return 'visible';
}
}
-
+
// For other form-related elements, only mark as visible if they share similar positioning
// (i.e., they're likely part of the same widget)
const sizeDiff = Math.abs(rect.width - coords.width) + Math.abs(rect.height - coords.height);
@@ -144,28 +144,28 @@ export class SomScreenshotInjected {
}
}
}
-
+
return 'hidden';
}
-
+
return 'partial';
-
+
} catch (e) {
// Fallback: use simple elementFromPoint check
const elem = document.elementFromPoint(centerX, centerY);
if (!elem) return 'hidden';
-
+
const rect = elem.getBoundingClientRect();
// Check if the element at center matches our bounds
- if (Math.abs(rect.left - coords.x) < 5 &&
+ if (Math.abs(rect.left - coords.x) < 5 &&
Math.abs(rect.top - coords.y) < 5) {
return 'visible';
}
return 'partial';
}
}
-
-
+
+
// Create overlay
const overlay = document.createElement('div');
overlay.id = 'camel-som-overlay-temp'; // Set ID immediately for cleanup
@@ -178,23 +178,23 @@ export class SomScreenshotInjected {
pointer-events: none;
z-index: 2147483647;
`;
-
+
// Check visibility for each element using coordinates
const elementStates = new Map();
-
+
Object.entries(elements).forEach(([ref, element]: [string, any]) => {
if (element.coordinates && clickable.includes(ref)) {
const visibility = checkElementVisibilityByCoords(element.coordinates, ref, element);
elementStates.set(ref, visibility);
-
+
// Add debug info
const centerX = element.coordinates.x + element.coordinates.width * 0.5;
const centerY = element.coordinates.y + element.coordinates.height * 0.5;
-
+
try {
const elementsAtCenter = document.elementsFromPoint(centerX, centerY);
const topmostElement = elementsAtCenter[0];
-
+
debugInfo.push({
ref,
coords: element.coordinates,
@@ -221,27 +221,27 @@ export class SomScreenshotInjected {
}
}
});
-
+
// Track label positions to avoid overlap
const labelPositions: Array<{x: number, y: number, width: number, height: number, ref: string}> = [];
-
+
// Helper to check if two rectangles overlap
function rectsOverlap(r1: any, r2: any): boolean {
- return !(r1.x + r1.width < r2.x ||
- r2.x + r2.width < r1.x ||
- r1.y + r1.height < r2.y ||
+ return !(r1.x + r1.width < r2.x ||
+ r2.x + r2.width < r1.x ||
+ r1.y + r1.height < r2.y ||
r2.y + r2.height < r1.y);
}
-
+
// Helper to find non-overlapping position for label
function findLabelPosition(element: any, labelWidth: number, labelHeight: number): {x: number, y: number} {
const { x, y, width, height } = element.coordinates;
const isSmallElement = height < 70;
const margin = 2; // Space between label and element
-
+
// Try different positions in order of preference
const positions = [];
-
+
if (isSmallElement) {
// For small elements, try outside positions
// 1. Above element
@@ -256,32 +256,32 @@ export class SomScreenshotInjected {
// For large elements, inside top-left
positions.push({ x: x + 4, y: y + 4 });
}
-
+
// Check each position
for (const pos of positions) {
// Adjust for viewport boundaries
const adjustedPos = { ...pos };
-
+
// Keep within viewport
adjustedPos.x = Math.max(0, Math.min(adjustedPos.x, window.innerWidth - labelWidth));
adjustedPos.y = Math.max(0, Math.min(adjustedPos.y, window.innerHeight - labelHeight));
-
+
// Check for overlaps with existing labels
const testRect = { x: adjustedPos.x, y: adjustedPos.y, width: labelWidth, height: labelHeight };
let hasOverlap = false;
-
+
for (const existing of labelPositions) {
if (rectsOverlap(testRect, existing)) {
hasOverlap = true;
break;
}
}
-
+
if (!hasOverlap) {
return adjustedPos;
}
}
-
+
// If all positions overlap, try to find space by offsetting
// Try positions around the element in a spiral pattern
const offsets = [
@@ -294,28 +294,28 @@ export class SomScreenshotInjected {
{ dx: -labelWidth - margin, dy: height + margin }, // Bottom-left
{ dx: width + margin, dy: height + margin }, // Bottom-right
];
-
+
for (const offset of offsets) {
const pos = {
x: Math.max(0, Math.min(x + offset.dx, window.innerWidth - labelWidth)),
y: Math.max(0, Math.min(y + offset.dy, window.innerHeight - labelHeight))
};
-
+
const testRect = { x: pos.x, y: pos.y, width: labelWidth, height: labelHeight };
let hasOverlap = false;
-
+
for (const existing of labelPositions) {
if (rectsOverlap(testRect, existing)) {
hasOverlap = true;
break;
}
}
-
+
if (!hasOverlap) {
return pos;
}
}
-
+
// Fallback: use original logic but ensure within viewport
if (isSmallElement) {
const fallbackY = y >= 25 ? y - 25 : y + height + 2;
@@ -327,17 +327,17 @@ export class SomScreenshotInjected {
return { x: x + 4, y: y + 4 };
}
}
-
+
// Add labels and collect geometry data (only for filtered elements)
Object.entries(elements).forEach(([ref, element]: [string, any]) => {
if (element.coordinates && clickable.includes(ref)) {
const state = elementStates.get(ref);
-
+
// Skip completely hidden elements
if (state === 'hidden') return;
const label = document.createElement('div');
const { x, y, width, height } = element.coordinates;
-
+
label.style.cssText = `
position: absolute;
left: ${x}px;
@@ -348,11 +348,11 @@ export class SomScreenshotInjected {
border-radius: 4px;
box-shadow: 0 2px 8px rgba(0,0,0,0.3);
`;
-
+
// Add ref number with smart positioning
const refLabel = document.createElement('div');
refLabel.textContent = ref;
-
+
// Create temporary label to measure its size
refLabel.style.cssText = `
position: absolute;
@@ -370,10 +370,10 @@ export class SomScreenshotInjected {
const labelWidth = refLabel.offsetWidth;
const labelHeight = refLabel.offsetHeight;
document.body.removeChild(refLabel);
-
+
// Find non-overlapping position
const labelPos = findLabelPosition(element, labelWidth, labelHeight);
-
+
// Apply final position
refLabel.style.cssText = `
position: absolute;
@@ -391,7 +391,7 @@ export class SomScreenshotInjected {
z-index: 1;
white-space: nowrap;
`;
-
+
// Track this label position
labelPositions.push({
x: labelPos.x,
@@ -400,10 +400,10 @@ export class SomScreenshotInjected {
height: labelHeight,
ref: ref
});
-
+
label.appendChild(refLabel);
overlay.appendChild(label);
-
+
// Collect geometry data
markedElements.push({
ref,
@@ -424,13 +424,13 @@ export class SomScreenshotInjected {
});
}
});
-
+
document.body.appendChild(overlay);
-
+
// Force repaint
await new Promise(resolve => requestAnimationFrame(resolve));
-
- return {
+
+ return {
overlayId: overlay.id, // Use the ID that was set earlier
elementCount: overlay.children.length,
markedElements,
@@ -441,22 +441,22 @@ export class SomScreenshotInjected {
clickable: Array.from(clickableElements),
filterDebugInfo: []
});
-
+
// Take screenshot
const screenshotBuffer = await page.screenshot({
fullPage: false,
type: 'png'
});
-
+
// Keep the overlay visible for 1 second before cleanup
await page.waitForTimeout(1000);
-
+
// Clean up
await page.evaluate((overlayId) => {
const overlay = document.getElementById(overlayId);
if (overlay) overlay.remove();
}, result.overlayId);
-
+
// Export element geometry if path is provided
if (exportPath && result.markedElements) {
try {
@@ -479,16 +479,16 @@ export class SomScreenshotInjected {
}
}
};
-
+
if (typeof writeFile !== 'undefined') {
await writeFile(exportPath, JSON.stringify(exportData, null, 2));
console.log(`Element geometry exported to: ${exportPath}`);
}
-
+
// Also save visibility debug info
if (result.debugInfo) {
const debugPath = exportPath.replace('.json', '-visibility-debug.json');
-
+
const debugData = {
timestamp,
url: pageUrl,
@@ -504,7 +504,7 @@ export class SomScreenshotInjected {
return order[a.visibilityResult] - order[b.visibilityResult];
})
};
-
+
if (typeof writeFile !== 'undefined') {
await writeFile(debugPath, JSON.stringify(debugData, null, 2));
console.log(`Visibility debug info exported to: ${debugPath}`);
@@ -514,10 +514,10 @@ export class SomScreenshotInjected {
console.error('Failed to export element geometry:', error);
}
}
-
+
const base64Image = screenshotBuffer.toString('base64');
const dataUrl = `data:image/png;base64,${base64Image}`;
-
+
return {
text: `Visual webpage screenshot captured with ${result.elementCount} interactive elements marked`,
images: [dataUrl],
@@ -534,10 +534,10 @@ export class SomScreenshotInjected {
filtered_count: 0 // Filtering is done before this method is called
}
};
-
+
} catch (error) {
console.error('SOM screenshot injection error:', error);
throw error;
}
}
-}
\ No newline at end of file
+}
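The label-placement logic in the hunks above rests on two small geometric primitives: the axis-aligned overlap test (`rectsOverlap`, as it appears in the diff) and clamping a candidate position into the viewport. A standalone sketch of both; `clampToViewport` is a hypothetical helper that mirrors the inline `Math.max`/`Math.min` clamping, with the viewport size passed in explicitly since this runs outside a browser:

```javascript
// Two axis-aligned rects overlap unless one lies entirely to the left,
// right, above, or below the other (same test as in the diff).
function rectsOverlap(r1, r2) {
  return !(r1.x + r1.width < r2.x ||
           r2.x + r2.width < r1.x ||
           r1.y + r1.height < r2.y ||
           r2.y + r2.height < r1.y);
}

// Clamp a candidate label position so the label stays fully on screen.
// Hypothetical helper; the toolkit inlines this against window.innerWidth/Height.
function clampToViewport(pos, labelWidth, labelHeight, viewportW, viewportH) {
  return {
    x: Math.max(0, Math.min(pos.x, viewportW - labelWidth)),
    y: Math.max(0, Math.min(pos.y, viewportH - labelHeight)),
  };
}
```

Candidate positions are then tried in preference order, and the first one that clamps into view without overlapping an already-placed label wins.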
diff --git a/camel/toolkits/hybrid_browser_toolkit/ts/src/types.ts b/camel/toolkits/hybrid_browser_toolkit/ts/src/types.ts
index 75afea7e20..8cc89a6daf 100644
--- a/camel/toolkits/hybrid_browser_toolkit/ts/src/types.ts
+++ b/camel/toolkits/hybrid_browser_toolkit/ts/src/types.ts
@@ -106,8 +106,8 @@ export interface EnterAction {
export interface MouseAction {
type: 'mouse_control';
control: 'click' | 'right_click' | 'dblclick';
- x: number;
- y: number;
+ x: number;
+ y: number;
}
export interface MouseDragAction {
@@ -127,4 +127,3 @@ export interface VisualMarkResult {
text: string;
images: string[];
}
-
diff --git a/camel/toolkits/hybrid_browser_toolkit/ts/tsconfig.json b/camel/toolkits/hybrid_browser_toolkit/ts/tsconfig.json
index 9e5e0da466..989f7f3031 100644
--- a/camel/toolkits/hybrid_browser_toolkit/ts/tsconfig.json
+++ b/camel/toolkits/hybrid_browser_toolkit/ts/tsconfig.json
@@ -24,4 +24,4 @@
"dist",
"**/*.test.ts"
]
-}
\ No newline at end of file
+}
diff --git a/camel/toolkits/hybrid_browser_toolkit/ts/websocket-server.js b/camel/toolkits/hybrid_browser_toolkit/ts/websocket-server.js
index 679c6a508c..aec2d3877f 100644
--- a/camel/toolkits/hybrid_browser_toolkit/ts/websocket-server.js
+++ b/camel/toolkits/hybrid_browser_toolkit/ts/websocket-server.js
@@ -10,7 +10,7 @@ class WebSocketBrowserServer {
async start() {
return new Promise((resolve, reject) => {
- this.server = new WebSocket.Server({
+ this.server = new WebSocket.Server({
port: this.port,
maxPayload: 50 * 1024 * 1024 // 50MB limit instead of default 1MB
}, () => {
@@ -21,33 +21,33 @@ class WebSocketBrowserServer {
this.server.on('connection', (ws) => {
console.log('Client connected');
-
+
ws.on('message', async (message) => {
try {
const data = JSON.parse(message.toString());
const { id, command, params } = data;
-
+
console.log(`Received command: ${command} with id: ${id}`);
-
+
const result = await this.handleCommand(command, params);
-
+
const response = {
id,
success: true,
result
};
-
+
ws.send(JSON.stringify(response));
} catch (error) {
console.error('Error handling command:', error);
-
+
const errorResponse = {
id: data?.id || 'unknown',
success: false,
error: error.message,
stack: error.stack
};
-
+
ws.send(JSON.stringify(errorResponse));
}
});
@@ -78,14 +78,14 @@ class WebSocketBrowserServer {
switch (command) {
case 'init':
console.log('Initializing toolkit with params:', JSON.stringify(params, null, 2));
-
+
// Check if CDP is available first
let useCdp = false;
let cdpUrl = params.cdpUrl || 'http://localhost:9222';
-
+
// Extract base URL and port for validation
const baseUrl = cdpUrl.includes('/devtools/') ? cdpUrl.split('/devtools/')[0] : cdpUrl;
-
+
try {
// Test if Chrome debug port is accessible and get page URL
const response = await fetch(`${baseUrl}/json`);
@@ -101,7 +101,7 @@ class WebSocketBrowserServer {
const firstPage = pages[0];
const pageUrl = firstPage.devtoolsFrontendUrl;
const pageId = pageUrl.match(/ws=localhost:\d+(.*)$/)?.[1];
-
+
if (pageId) {
useCdp = true;
cdpUrl = `${baseUrl}${pageId}`;
@@ -113,14 +113,14 @@ class WebSocketBrowserServer {
} catch (error) {
console.log('Chrome debug port not accessible, will start new browser instance');
}
-
+
const config = {
connectOverCdp: useCdp,
cdpUrl: useCdp ? cdpUrl : undefined,
headless: false,
...params
};
-
+
console.log('Final config:', JSON.stringify(config, null, 2));
this.toolkit = new HybridBrowserToolkit(config);
return { message: 'Toolkit initialized with CDP connection' };
@@ -184,7 +184,7 @@ class WebSocketBrowserServer {
case 'enter':
if (!this.toolkit) throw new Error('Toolkit not initialized');
return await this.toolkit.enter();
-
+
case 'mouse_control':
if (!this.toolkit) throw new Error('Toolkit not initialized');
return await this.toolkit.mouseControl(params.control, params.x, params.y);
@@ -236,7 +236,7 @@ class WebSocketBrowserServer {
case 'shutdown': {
console.log('Shutting down server...');
-
+
// Close browser first
if (this.toolkit) {
try {
@@ -245,10 +245,10 @@ class WebSocketBrowserServer {
console.error('Error closing browser:', error);
}
}
-
+
// Return response immediately
const shutdownResponse = { message: 'Server shutting down' };
-
+
// Schedule server shutdown after a short delay to ensure response is sent
setTimeout(() => {
// Close the WebSocket server properly
@@ -262,7 +262,7 @@ class WebSocketBrowserServer {
console.log('Exiting process...');
process.exit(0);
});
-
+
// Fallback timeout in case server close hangs
setTimeout(() => {
console.log('Server close timeout, forcing exit...');
@@ -273,7 +273,7 @@ class WebSocketBrowserServer {
process.exit(0);
}
}, 100); // Delay to ensure response is sent
-
+
return shutdownResponse;
}
@@ -293,7 +293,7 @@ class WebSocketBrowserServer {
// Start server if this file is run directly
if (require.main === module) {
const server = new WebSocketBrowserServer();
-
+
server.start().then((port) => {
// Output the port so the Python client can connect
console.log(`SERVER_READY:${port}`);
@@ -316,4 +316,4 @@ if (require.main === module) {
});
}
-module.exports = WebSocketBrowserServer;
\ No newline at end of file
+module.exports = WebSocketBrowserServer;
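The message handler patched above speaks a simple JSON envelope: requests carry `{ id, command, params }`, and replies mirror the `id` with either `{ success: true, result }` or `{ success: false, error }` (falling back to `'unknown'` when the request never parsed). A sketch of that envelope in isolation; `handleRequest` is a hypothetical synchronous stand-in for the server's async handler:

```javascript
// Dispatch one request message against a table of command handlers and
// build the reply envelope the WebSocket server sends back.
function handleRequest(message, handlers) {
  let data;
  try {
    data = JSON.parse(message);
    const handler = handlers[data.command];
    if (!handler) throw new Error(`Unknown command: ${data.command}`);
    return { id: data.id, success: true, result: handler(data.params) };
  } catch (error) {
    // Mirror the server's fallback: echo the id if parsing got that far.
    return { id: (data && data.id) || 'unknown', success: false, error: error.message };
  }
}
```

Because every reply echoes the request `id`, the Python client can multiplex several in-flight commands over one socket and match responses to callers.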
diff --git a/camel/toolkits/hybrid_browser_toolkit_py/agent.py b/camel/toolkits/hybrid_browser_toolkit_py/agent.py
index d8c7ff6d4f..0bf6f2807f 100644
--- a/camel/toolkits/hybrid_browser_toolkit_py/agent.py
+++ b/camel/toolkits/hybrid_browser_toolkit_py/agent.py
@@ -39,35 +39,35 @@ class PlaywrightLLMAgent:
"then output the FIRST action to start with.\n\n"
"Return a JSON object in *exactly* this shape:\n"
"Action format json_object examples:\n"
-"{\n \"plan\": [\"Step 1\", \"Step 2\"],\n \"action\": {\n \"type\":
+"{\n \"plan\": [\"Step 1\", \"Step 2\"],\n \"action\": {\n \"type\":
\"click\",\n \"ref\": \"e1\"\n }\n}\n\n"
"If task is already complete:\n"
"{\n \"plan\": [],\n \"action\": {\n \"type\": \"finish\",
-\n \"ref\": null,\n \"summary\": \"Task was already completed. Summary
+\n \"ref\": null,\n \"summary\": \"Task was already completed. Summary
of what was found...\"\n }\n}"
Available action types:
-- 'click': {"type": "click", "ref": "e1"} or {"type": "click", "text":
+- 'click': {"type": "click", "ref": "e1"} or {"type": "click", "text":
"Button Text"} or {"type": "click", "selector": "button"}
-- 'type': {"type": "type", "ref": "e1", "text": "search text"} or {"type":
+- 'type': {"type": "type", "ref": "e1", "text": "search text"} or {"type":
"type", "selector": "input", "text": "search text"}
-- 'select': {"type": "select", "ref": "e1", "value": "option"} or {"type":
+- 'select': {"type": "select", "ref": "e1", "value": "option"} or {"type":
"select", "selector": "select", "value": "option"}
-- 'wait': {"type": "wait", "timeout": 2000} or {"type": "wait", "selector":
+- 'wait': {"type": "wait", "timeout": 2000} or {"type": "wait", "selector":
"#element"}
- 'scroll': {"type": "scroll", "direction": "down", "amount": 300}
-- 'enter': {"type": "enter", "ref": "e1"} or {"type": "enter", "selector":
+- 'enter': {"type": "enter", "ref": "e1"} or {"type": "enter", "selector":
"input[name=q]"} or {"type": "enter"}
- 'navigate': {"type": "navigate", "url": "https://example.com"}
-- 'finish': {"type": "finish", "ref": null, "summary": "task completion
+- 'finish': {"type": "finish", "ref": null, "summary": "task completion
summary"}
-IMPORTANT:
-- For 'click': Use 'ref' from snapshot, or 'text' for visible text,
+IMPORTANT:
+- For 'click': Use 'ref' from snapshot, or 'text' for visible text,
or 'selector' for CSS selectors
- For 'type'/'select': Use 'ref' from snapshot or 'selector' for CSS selectors
- Only use 'ref' values that exist in the snapshot (e.g., ref=e1, ref=e2, etc.)
-- Use 'finish' when the task is completed successfully with a summary of
+- Use 'finish' when the task is completed successfully with a summary of
what was accomplished
- Use 'enter' to press the Enter key (optionally focus an element first)
- Use 'navigate' to open a new URL before interacting further
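The action schema this prompt spells out can be enforced with a small validator before the agent acts on a model reply. A sketch under the assumption that the reply is the raw JSON object described above; `parseAction` and `ACTION_TYPES` are hypothetical names, not part of the toolkit:

```javascript
// Action types permitted by the prompt above.
const ACTION_TYPES = ['click', 'type', 'select', 'wait', 'scroll', 'enter', 'navigate', 'finish'];

// Parse a model reply and return its action, rejecting malformed objects early.
function parseAction(raw) {
  const obj = JSON.parse(raw);
  if (!Array.isArray(obj.plan)) throw new Error('Missing "plan" array');
  const action = obj.action;
  if (!action || !ACTION_TYPES.includes(action.type)) {
    throw new Error(`Unknown action type: ${action && action.type}`);
  }
  // Per the prompt, 'finish' must carry a summary of what was accomplished.
  if (action.type === 'finish' && typeof action.summary !== 'string') {
    throw new Error("'finish' requires a summary");
  }
  return action;
}
```

Rejecting an unknown `type` here, rather than deep in the executor, gives the agent a clean point to re-prompt the model with the error message.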
diff --git a/camel/toolkits/hybrid_browser_toolkit_py/hybrid_browser_toolkit.py b/camel/toolkits/hybrid_browser_toolkit_py/hybrid_browser_toolkit.py
index 3f918e5ecc..04a397fec3 100644
--- a/camel/toolkits/hybrid_browser_toolkit_py/hybrid_browser_toolkit.py
+++ b/camel/toolkits/hybrid_browser_toolkit_py/hybrid_browser_toolkit.py
@@ -2054,7 +2054,7 @@ async def browser_console_exec(self, code: str) -> Dict[str, Any]:
}).join(' '));
originalLog.apply(console, args);
};
-
+
let result;
try {
// First try to evaluate as an expression
@@ -2073,7 +2073,7 @@ async def browser_console_exec(self, code: str) -> Dict[str, Any]:
throw error;
}
}
-
+
console.log = originalLog;
return { result, logs: _logs };
})()
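The wrapper this hunk touches evaluates user code in two passes: first as a parenthesized expression (so `{ a: 1 }` comes back as an object rather than being parsed as a block statement), then as plain statements when the first pass raises a `SyntaxError`. A minimal sketch of that fallback order, using indirect `eval` as a stand-in for the page context:

```javascript
// Evaluate console code the way browser_console_exec's injected wrapper does:
// try it as an expression first, then fall back to statement evaluation.
function evalConsoleCode(code) {
  const indirectEval = (0, eval); // indirect call: evaluates in global scope
  try {
    // Pass 1: wrap in parens and evaluate as an expression.
    return indirectEval(`(${code})`);
  } catch (error) {
    if (error instanceof SyntaxError) {
      // Pass 2: run as statements; eval returns the completion value.
      return indirectEval(code);
    }
    throw error; // runtime errors from the expression propagate unchanged
  }
}
```

The `SyntaxError` check matters: a runtime error thrown by a valid expression should surface to the caller, not trigger a second, redundant evaluation.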
diff --git a/camel/toolkits/hybrid_browser_toolkit_py/stealth_script.js b/camel/toolkits/hybrid_browser_toolkit_py/stealth_script.js
index ba9296cf54..895706a6c3 100644
Binary files a/camel/toolkits/hybrid_browser_toolkit_py/stealth_script.js and b/camel/toolkits/hybrid_browser_toolkit_py/stealth_script.js differ
diff --git a/camel/toolkits/hybrid_browser_toolkit_py/unified_analyzer.js b/camel/toolkits/hybrid_browser_toolkit_py/unified_analyzer.js
index f1a0199c7d..286df50abb 100644
--- a/camel/toolkits/hybrid_browser_toolkit_py/unified_analyzer.js
+++ b/camel/toolkits/hybrid_browser_toolkit_py/unified_analyzer.js
@@ -12,7 +12,7 @@
let elementRefMap = window.__camelElementRefMap || new WeakMap();
let refElementMap = window.__camelRefElementMap || new Map();
let elementSignatureMap = window.__camelElementSignatureMap || new Map();
-
+
// LRU tracking for ref access times
let refAccessTimes = window.__camelRefAccessTimes || new Map();
let lastNavigationUrl = window.__camelLastNavigationUrl || window.location.href;
@@ -20,28 +20,28 @@
// Initialize navigation event listeners for automatic cleanup
if (!window.__camelNavigationListenersInitialized) {
window.__camelNavigationListenersInitialized = true;
-
+
// Listen for page navigation events
window.addEventListener('beforeunload', clearAllRefs);
window.addEventListener('pagehide', clearAllRefs);
-
+
// Listen for pushState/replaceState navigation (SPA navigation)
const originalPushState = history.pushState;
const originalReplaceState = history.replaceState;
-
+
history.pushState = function(...args) {
clearAllRefs();
return originalPushState.apply(this, args);
};
-
+
history.replaceState = function(...args) {
clearAllRefs();
return originalReplaceState.apply(this, args);
};
-
+
// Listen for popstate (back/forward navigation)
window.addEventListener('popstate', clearAllRefs);
-
+
// Check for URL changes periodically (fallback for other navigation types)
setInterval(() => {
if (window.location.href !== lastNavigationUrl) {
@@ -66,23 +66,23 @@
document.querySelectorAll('[aria-ref]').forEach(element => {
element.removeAttribute('aria-ref');
});
-
+
// Clear all maps and reset counters
elementRefMap.clear();
refElementMap.clear();
elementSignatureMap.clear();
refAccessTimes.clear();
-
+
// Reset global state
window.__camelElementRefMap = elementRefMap;
window.__camelRefElementMap = refElementMap;
window.__camelElementSignatureMap = elementSignatureMap;
window.__camelRefAccessTimes = refAccessTimes;
-
+
// Clear cached analysis results
delete window.__camelLastAnalysisResult;
delete window.__camelLastAnalysisTime;
-
+
console.log('CAMEL: Cleared all refs due to navigation');
} catch (error) {
console.warn('CAMEL: Error clearing refs:', error);
@@ -110,14 +110,14 @@
// Element might be detached from DOM
}
elementRefMap.delete(element);
-
+
// Remove from signature map
const signature = generateElementSignature(element);
if (signature && elementSignatureMap.get(signature) === ref) {
elementSignatureMap.delete(signature);
}
}
-
+
refElementMap.delete(ref);
refAccessTimes.delete(ref);
evictedCount++;
@@ -250,11 +250,11 @@
// Remove refs for elements that are hidden or have no meaningful content
try {
const style = window.getComputedStyle(element);
- const hasNoVisibleContent = !element.textContent?.trim() &&
- !element.value?.trim() &&
- !element.src &&
+ const hasNoVisibleContent = !element.textContent?.trim() &&
+ !element.value?.trim() &&
+ !element.src &&
!element.href;
-
+
if ((style.display === 'none' || style.visibility === 'hidden') && hasNoVisibleContent) {
shouldRemove = true;
}
@@ -280,7 +280,7 @@
// Element might be detached from DOM
}
elementRefMap.delete(element);
-
+
// Remove from signature map
const signature = generateElementSignature(element);
if (signature && elementSignatureMap.get(signature) === ref) {
@@ -406,7 +406,7 @@
if (tagName === 'header') return 'banner';
if (tagName === 'footer') return 'contentinfo';
if (tagName === 'fieldset') return 'group';
-
+
// Enhanced role mappings for table elements
if (tagName === 'table') return 'table';
if (tagName === 'tr') return 'row';
@@ -489,9 +489,9 @@
// Add a heuristic to ignore code-like text that might be in the DOM
if ((text.match(/[;:{}]/g)?.length || 0) > 2) return '';
-
-
+
+
return text;
}
@@ -587,7 +587,7 @@
if (level > 0) node.level = level;
-
+
return node;
}
@@ -735,9 +735,9 @@
if (isRedundantWrapper) {
return node.children;
}
-
-
+
+
return [node];
}
@@ -831,7 +831,7 @@
// Check if element is within the current viewport
function isInViewport(element) {
if (!element || element.nodeType !== Node.ELEMENT_NODE) return false;
-
+
try {
const rect = element.getBoundingClientRect();
return (
@@ -1040,4 +1040,4 @@
// Execute analysis and return result
return analyzePageElements();
-})();
\ No newline at end of file
+})();
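The SPA-navigation hook in this analyzer works by monkey-patching `history.pushState` and `history.replaceState` so a cleanup routine runs before the original method. The pattern in isolation, with a plain object standing in for `history` since this sketch runs outside a browser; `wrapWithCleanup` is a hypothetical name:

```javascript
// Wrap obj[method] so `cleanup` fires before every call, then defer to the
// original implementation with `this` and arguments preserved.
function wrapWithCleanup(obj, method, cleanup) {
  const original = obj[method];
  obj[method] = function (...args) {
    cleanup();                          // e.g. clearAllRefs() in the analyzer
    return original.apply(this, args);  // then run the real method unchanged
  };
}
```

Since `pushState`/`replaceState` fire no DOM event of their own, wrapping them (plus listening for `popstate` and polling the URL as a fallback) is what lets the analyzer invalidate stale element refs on single-page navigations.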
diff --git a/camel/toolkits/image_analysis_toolkit.py b/camel/toolkits/image_analysis_toolkit.py
index 58bc80aaef..5ee9c09375 100644
--- a/camel/toolkits/image_analysis_toolkit.py
+++ b/camel/toolkits/image_analysis_toolkit.py
@@ -76,7 +76,7 @@ def image_to_text(
Returns:
str: Natural language description of the image.
"""
- default_content = '''You are an image analysis expert. Provide a
+ default_content = '''You are an image analysis expert. Provide a
detailed description including text if present.'''
system_msg = BaseMessage.make_assistant_message(
diff --git a/camel/toolkits/open_api_specs/biztoc/ai-plugin.json b/camel/toolkits/open_api_specs/biztoc/ai-plugin.json
index ab873b80b2..6c803df6f0 100644
--- a/camel/toolkits/open_api_specs/biztoc/ai-plugin.json
+++ b/camel/toolkits/open_api_specs/biztoc/ai-plugin.json
@@ -31,4 +31,4 @@
"title": "New"
}
]
-}
\ No newline at end of file
+}
diff --git a/camel/toolkits/open_api_specs/outschool/ai-plugin.json b/camel/toolkits/open_api_specs/outschool/ai-plugin.json
index 1189675d55..59f40ebbd2 100644
--- a/camel/toolkits/open_api_specs/outschool/ai-plugin.json
+++ b/camel/toolkits/open_api_specs/outschool/ai-plugin.json
@@ -31,4 +31,4 @@
"title": "New"
}
]
-}
\ No newline at end of file
+}
diff --git a/camel/toolkits/open_api_specs/outschool/openapi.yaml b/camel/toolkits/open_api_specs/outschool/openapi.yaml
index 422e9422fc..85d58dd0ae 100644
--- a/camel/toolkits/open_api_specs/outschool/openapi.yaml
+++ b/camel/toolkits/open_api_specs/outschool/openapi.yaml
@@ -1 +1 @@
-{"openapi":"3.0.1","info":{"title":"Outschool Plugin","description":"Search for top-quality online classes and teachers on Outschool.","version":"v1"},"servers":[{"url":"https://chatgpt-plugin.outschool.com/api"}],"paths":{"/classes":{"get":{"operationId":"searchClasses","description":"Returns a list of online classes","parameters":[{"name":"timeZone","in":"query","required":true,"description":"IANA Time Zone identifier of the user. Either provided by user or derived from their location. Since Outschool parents and teachers can be from different time zones, this is required to search classes that are available in parent's timezone at reasonable hours. Only IANA format is accepted.","schema":{"type":"string"},"examples":{"losAngeles":{"value":"America/Los_Angeles"},"newYork":{"value":"America/New_York"},"london":{"value":"Europe/London"}}},{"name":"age","in":"query","required":true,"description":"Outschool has several classes serving different age groups. The age of the learner(s) helps to find classes that match the best. This is a comma separated list. If the age difference between the children is more than 5 years, it may be better to search for different ages separately to get better search results.","schema":{"type":"string","minimum":3,"maximum":18},"examples":{"12":{"value":"12"},"1213":{"value":"12,13"},"5617":{"value":"5,6,17"}}},{"name":"q","in":"query","required":false,"description":"Keywords to use to search in the class list. Classes matching the keyword closest will be returned.","schema":{"type":"string"}},{"name":"delivery","in":"query","required":false,"explode":true,"description":"Filters classes by delivery type. 
Description for different enum values:\n One-time: Classes that meets once\n Ongoing: Weekly classes that learners can enroll in any week\n Semester course: Multi-week/session classes, usually more than 4 weeks\n Short course: Multi-week/session classes, usually around 4 weeks\n Camp: Semester or short courses during summer and school breaks\n Group: Async chat groups on a specific topic where learners share ideas and experiences, like clubs","schema":{"type":"array","items":{"type":"string","enum":["One-time","Ongoing","Semester course","Short course","Camp","Group"]}}},{"name":"userUid","in":"query","required":false,"description":"Only search classes taught by a specific teacher. The userUid is the id of the teacher","schema":{"type":"string","format":"uuid"}},{"name":"order","in":"query","description":"Sort results by either upcoming, new, or relevance. Upcoming sorts by next section start date in ascending order, new sorts by class published date in descending order, and relevance sorts by the keyword relevance and popularity of the class.","schema":{"type":"string","enum":["upcoming","new","relevance"],"default":"relevance"}},{"name":"offset","in":"query","required":false,"description":"The offset for the results. Offset and limit used in combination to paginate in results. For instance, if limit is 10, to get next 10 results, the offset should be set to 10.","schema":{"type":"number","default":0}},{"name":"limit","in":"query","required":false,"description":"Number of results to return.","schema":{"type":"number","default":10}},{"name":"startAfter","in":"query","required":false,"description":"Search classes that have a section starting on or after a given date. 
Only today or future dates are allowed.","schema":{"type":"string","format":"date"},"examples":{"April152023":{"value":"2023-04-15"}}},{"name":"dow","in":"query","description":"The day of week to filter classes and only return classes that have a section on given days of the week.","schema":{"type":"array","items":{"type":"string","enum":["Mon","Tue","Wed","Thu","Fri","Sat","Sun"]}},"style":"form","explode":true,"required":false,"examples":{"Mon":{"value":"Mon"},"Mon_Tue":{"value":"Mon,Tue"},"Mon_Thu":{"value":"Mon,Tue,Wed,Thu"},"Weekdays":{"value":"Mon,Tue,Wed,Thu,Fri"},"Weekend":{"value":"Sat, Sun"}}},{"name":"startAfterTime","in":"query","description":"The start time of the class in 24 hour format as hour of the day normalized by the user's timezone","schema":{"type":"number","minimum":6,"maximum":22}},{"name":"endByTime","in":"query","description":"The end time of the class in 24 hour format as hour of the day normalized by the user's timezone","schema":{"type":"number","minimum":6,"maximum":22}}],"responses":{"200":{"description":"A list of classes","content":{"application/json":{"schema":{"type":"array","items":{"$ref":"#/components/schemas/class"}}}}}}}},"/teachers":{"get":{"operationId":"searchTeachers","description":"Returns a list of teachers","parameters":[{"name":"name","in":"query","required":true,"description":"Name of the teacher to search for","schema":{"type":"string"}},{"name":"limit","in":"query","required":false,"description":"Number of results to return.","schema":{"type":"number","default":10}}],"responses":{"200":{"description":"A list of teachers","content":{"application/json":{"schema":{"type":"array","items":{"$ref":"#/components/schemas/teacher"}}}}}}}}},"components":{"schemas":{"class":{"type":"object","properties":{"uid":{"type":"string","format":"uuid","description":"Unique ID of the class in the system that can be used in other API end points"},"title":{"type":"string","description":"Title of the 
class"},"summary":{"type":"string","description":"Summary of the class"},"url":{"type":"string","format":"uri","description":"URL to the class detail page"},"photo":{"type":"string","format":"uri","description":"Photo of the class"},"is_ongoing_weekly":{"type":"boolean","description":"Whether this class is an ongoing class or not. When a class is an ongoing class, parents can enroll their children for any week of an ongoing class, because the sections of that class meet every week and the weeks don't depend on each other."},"age_min":{"type":"number","description":"The minimum age a learner should be to enroll in the class. Although Outschool has classes for different age groups, individual classes may only be appropriate for a certain age range."},"age_max":{"type":"number","description":"The maximum age a learner should be to enroll in the class. Although Outschool has classes for different age groups, individual classes may only be appropriate for a certain age range."},"teacher":{"$ref":"#/components/schemas/teacher"},"nextSection":{"$ref":"#/components/schemas/section","nullable":true,"description":"The next section of the class that the parent/caregiver can enroll their children in. This is usually what parents are looking for to enroll in a class."}}},"teacher":{"type":"object","properties":{"uid":{"type":"string","format":"uuid","description":"Unique ID of the teacher in the system that can be used in other API end points"},"name":{"type":"string","description":"Name of the teacher"},"about":{"type":"string","description":"A short summary the teacher provides about themselves"},"photo":{"type":"string","format":"uri","description":"Photo of the teacher"},"url":{"type":"string","format":"uri","description":"URL to the Outschool profile page of the teacher"}}},"section":{"type":"object","description":"Sections are what parents enroll their children in for a given class. 
They are separate cohorts of a class.","properties":{"uid":{"type":"string","format":"uuid","description":"Unique ID of the section in the system that can be used in other API end points"},"url":{"type":"string","format":"uri","description":"URL pointing to the section page"},"start_time":{"type":"string","format":"datetime","description":"The start time for the first meeting of a section."},"end_time":{"type":"string","format":"datetime","description":"The end time for the last meeting of a section."},"size_max":{"type":"number","description":"How many learners can enroll in the section."},"filledSpaceCount":{"type":"number","description":"How many learners are enrolled in the section. size_max - filledSpaceCount gives how many seats are left to enroll in."},"nextOngoingMeeting":{"$ref":"#/components/schemas/meeting","nullable":true,"description":"If the class is an ongoing class, this points to the next meeting for the section."}}},"meeting":{"type":"object","description":"The online meeting for a section. Meetings are held on Zoom.","properties":{"uid":{"type":"string","format":"uuid","description":"Unique ID of the meeting in the system that can be used in other API end points"},"start_time":{"type":"string","format":"datetime","description":"The start time of the meeting."},"end_time":{"type":"string","format":"datetime","description":"The end time of the meeting."}}}}}}
\ No newline at end of file
+{"openapi":"3.0.1","info":{"title":"Outschool Plugin","description":"Search for top-quality online classes and teachers on Outschool.","version":"v1"},"servers":[{"url":"https://chatgpt-plugin.outschool.com/api"}],"paths":{"/classes":{"get":{"operationId":"searchClasses","description":"Returns a list of online classes","parameters":[{"name":"timeZone","in":"query","required":true,"description":"IANA Time Zone identifier of the user. Either provided by user or derived from their location. Since Outschool parents and teachers can be from different time zones, this is required to search classes that are available in parent's timezone at reasonable hours. Only IANA format is accepted.","schema":{"type":"string"},"examples":{"losAngeles":{"value":"America/Los_Angeles"},"newYork":{"value":"America/New_York"},"london":{"value":"Europe/London"}}},{"name":"age","in":"query","required":true,"description":"Outschool has several classes serving different age groups. The age of the learner(s) helps to find classes that match the best. This is a comma separated list. If the age difference between the children is more than 5 years, it may be better to search for different ages separately to get better search results.","schema":{"type":"string","minimum":3,"maximum":18},"examples":{"12":{"value":"12"},"1213":{"value":"12,13"},"5617":{"value":"5,6,17"}}},{"name":"q","in":"query","required":false,"description":"Keywords to use to search in the class list. Classes matching the keyword closest will be returned.","schema":{"type":"string"}},{"name":"delivery","in":"query","required":false,"explode":true,"description":"Filters classes by delivery type. 
Description for different enum values:\n One-time: Classes that meets once\n Ongoing: Weekly classes that learners can enroll in any week\n Semester course: Multi-week/session classes, usually more than 4 weeks\n Short course: Multi-week/session classes, usually around 4 weeks\n Camp: Semester or short courses during summer and school breaks\n Group: Async chat groups on a specific topic where learners share ideas and experiences, like clubs","schema":{"type":"array","items":{"type":"string","enum":["One-time","Ongoing","Semester course","Short course","Camp","Group"]}}},{"name":"userUid","in":"query","required":false,"description":"Only search classes taught by a specific teacher. The userUid is the id of the teacher","schema":{"type":"string","format":"uuid"}},{"name":"order","in":"query","description":"Sort results by either upcoming, new, or relevance. Upcoming sorts by next section start date in ascending order, new sorts by class published date in descending order, and relevance sorts by the keyword relevance and popularity of the class.","schema":{"type":"string","enum":["upcoming","new","relevance"],"default":"relevance"}},{"name":"offset","in":"query","required":false,"description":"The offset for the results. Offset and limit used in combination to paginate in results. For instance, if limit is 10, to get next 10 results, the offset should be set to 10.","schema":{"type":"number","default":0}},{"name":"limit","in":"query","required":false,"description":"Number of results to return.","schema":{"type":"number","default":10}},{"name":"startAfter","in":"query","required":false,"description":"Search classes that have a section starting on or after a given date. 
Only today or future dates are allowed.","schema":{"type":"string","format":"date"},"examples":{"April152023":{"value":"2023-04-15"}}},{"name":"dow","in":"query","description":"The day of week to filter classes and only return classes that have a section on given days of the week.","schema":{"type":"array","items":{"type":"string","enum":["Mon","Tue","Wed","Thu","Fri","Sat","Sun"]}},"style":"form","explode":true,"required":false,"examples":{"Mon":{"value":"Mon"},"Mon_Tue":{"value":"Mon,Tue"},"Mon_Thu":{"value":"Mon,Tue,Wed,Thu"},"Weekdays":{"value":"Mon,Tue,Wed,Thu,Fri"},"Weekend":{"value":"Sat, Sun"}}},{"name":"startAfterTime","in":"query","description":"The start time of the class in 24 hour format as hour of the day normalized by the user's timezone","schema":{"type":"number","minimum":6,"maximum":22}},{"name":"endByTime","in":"query","description":"The end time of the class in 24 hour format as hour of the day normalized by the user's timezone","schema":{"type":"number","minimum":6,"maximum":22}}],"responses":{"200":{"description":"A list of classes","content":{"application/json":{"schema":{"type":"array","items":{"$ref":"#/components/schemas/class"}}}}}}}},"/teachers":{"get":{"operationId":"searchTeachers","description":"Returns a list of teachers","parameters":[{"name":"name","in":"query","required":true,"description":"Name of the teacher to search for","schema":{"type":"string"}},{"name":"limit","in":"query","required":false,"description":"Number of results to return.","schema":{"type":"number","default":10}}],"responses":{"200":{"description":"A list of teachers","content":{"application/json":{"schema":{"type":"array","items":{"$ref":"#/components/schemas/teacher"}}}}}}}}},"components":{"schemas":{"class":{"type":"object","properties":{"uid":{"type":"string","format":"uuid","description":"Unique ID of the class in the system that can be used in other API end points"},"title":{"type":"string","description":"Title of the 
class"},"summary":{"type":"string","description":"Summary of the class"},"url":{"type":"string","format":"uri","description":"URL to the class detail page"},"photo":{"type":"string","format":"uri","description":"Photo of the class"},"is_ongoing_weekly":{"type":"boolean","description":"Whether this class is an ongoing class or not. When a class is an ongoing class, parents can enroll their children for any week of an ongoing class, because the sections of that class meet every week and the weeks don't depend on each other."},"age_min":{"type":"number","description":"The minimum age a learner should be to enroll in the class. Although Outschool has classes for different age groups, individual classes may only be appropriate for a certain age range."},"age_max":{"type":"number","description":"The maximum age a learner should be to enroll in the class. Although Outschool has classes for different age groups, individual classes may only be appropriate for a certain age range."},"teacher":{"$ref":"#/components/schemas/teacher"},"nextSection":{"$ref":"#/components/schemas/section","nullable":true,"description":"The next section of the class that the parent/caregiver can enroll their children in. This is usually what parents are looking for to enroll in a class."}}},"teacher":{"type":"object","properties":{"uid":{"type":"string","format":"uuid","description":"Unique ID of the teacher in the system that can be used in other API end points"},"name":{"type":"string","description":"Name of the teacher"},"about":{"type":"string","description":"A short summary the teacher provides about themselves"},"photo":{"type":"string","format":"uri","description":"Photo of the teacher"},"url":{"type":"string","format":"uri","description":"URL to the Outschool profile page of the teacher"}}},"section":{"type":"object","description":"Sections are what parents enroll their children in for a given class. 
They are separate cohorts of a class.","properties":{"uid":{"type":"string","format":"uuid","description":"Unique ID of the section in the system that can be used in other API end points"},"url":{"type":"string","format":"uri","description":"URL pointing to the section page"},"start_time":{"type":"string","format":"datetime","description":"The start time for the first meeting of a section."},"end_time":{"type":"string","format":"datetime","description":"The end time for the last meeting of a section."},"size_max":{"type":"number","description":"How many learners can enroll in the section."},"filledSpaceCount":{"type":"number","description":"How many learners are enrolled in the section. size_max - filledSpaceCount gives how many seats are left to enroll in."},"nextOngoingMeeting":{"$ref":"#/components/schemas/meeting","nullable":true,"description":"If the class is an ongoing class, this points to the next meeting for the section."}}},"meeting":{"type":"object","description":"The online meeting for a section. Meetings are held on Zoom.","properties":{"uid":{"type":"string","format":"uuid","description":"Unique ID of the meeting in the system that can be used in other API end points"},"start_time":{"type":"string","format":"datetime","description":"The start time of the meeting."},"end_time":{"type":"string","format":"datetime","description":"The end time of the meeting."}}}}}}
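The spec above describes cursor-style pagination via `offset`/`limit` and a comma-separated `dow` filter for the `searchClasses` endpoint. As a minimal illustrative sketch (the helper name and defaults are assumptions, not part of the spec file), building the query string for a page of results might look like:

```python
from urllib.parse import urlencode

def build_classes_query(offset=0, limit=10, order="relevance",
                        dow=None, start_after=None):
    """Build the query string for the searchClasses endpoint.

    `dow` is a list like ["Mon", "Tue"]; per the spec's examples it is
    sent as a comma-separated value. Offset/limit paginate together: to
    fetch the next page, advance offset by limit.
    """
    params = {"order": order, "offset": offset, "limit": limit}
    if dow:
        params["dow"] = ",".join(dow)
    if start_after:
        params["startAfter"] = start_after  # ISO date, e.g. "2023-04-15"
    return urlencode(params)

# Paginating: page n starts at offset n * limit
first_page = build_classes_query(offset=0, limit=10)
second_page = build_classes_query(offset=10, limit=10)
```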
diff --git a/camel/toolkits/open_api_specs/web_scraper/ai-plugin.json b/camel/toolkits/open_api_specs/web_scraper/ai-plugin.json
index 92f6b20807..549179e7c7 100644
--- a/camel/toolkits/open_api_specs/web_scraper/ai-plugin.json
+++ b/camel/toolkits/open_api_specs/web_scraper/ai-plugin.json
@@ -31,4 +31,4 @@
"title": "New"
}
]
-}
\ No newline at end of file
+}
diff --git a/camel/toolkits/page_script.js b/camel/toolkits/page_script.js
index 6f7bc390a7..46ddca0ed2 100644
--- a/camel/toolkits/page_script.js
+++ b/camel/toolkits/page_script.js
@@ -1,6 +1,6 @@
var MultimodalWebSurfer = MultimodalWebSurfer || (function() {
let nextLabel = 10;
-
+
let roleMapping = {
"a": "link",
"area": "link",
@@ -22,23 +22,23 @@ var MultimodalWebSurfer = MultimodalWebSurfer || (function() {
"option": "option",
"textarea": "textbox"
};
-
+
let getCursor = function(elm) {
return window.getComputedStyle(elm)["cursor"];
};
-
+
let getInteractiveElements = function() {
-
+
let results = []
let roles = ["scrollbar", "searchbox", "slider", "spinbutton", "switch", "tab", "treeitem", "button", "checkbox", "gridcell", "link", "menuitem", "menuitemcheckbox", "menuitemradio", "option", "progressbar", "radio", "textbox", "combobox", "menu", "tree", "treegrid", "grid", "listbox", "radiogroup", "widget"];
let inertCursors = ["auto", "default", "none", "text", "vertical-text", "not-allowed", "no-drop"];
-
+
// Get the main interactive elements
let nodeList = document.querySelectorAll("input, select, textarea, button, [href], [onclick], [contenteditable], [tabindex]:not([tabindex='-1'])");
for (let i=0; i<nodeList.length; i++) {
    results.push(nodeList[i]);
}

// Anything not already included that has a cursor suggesting interactivity
let allElements = document.querySelectorAll("*");
for (let i=0; i<allElements.length; i++) {
    let node = allElements[i];
    let cursor = getCursor(node);
    if (inertCursors.indexOf(cursor) >= 0) {
continue;
}
-
+
// Move up to the first instance of this cursor change
parent = node.parentNode;
while (parent && getCursor(parent) == cursor) {
node = parent;
parent = node.parentNode;
}
-
+
// Add the node if it is new
if (results.indexOf(node) == -1) {
results.push(node);
}
}
-
+
return results;
};
-
+
let labelElements = function(elements) {
for (let i=0; i= 1;
-
+
let record = {
"tag_name": ariaRole[1],
"role": ariaRole[0],
@@ -207,7 +207,7 @@ var MultimodalWebSurfer = MultimodalWebSurfer || (function() {
"v-scrollable": vScrollable,
"rects": []
};
-
+
for (const rect of rects) {
let x = rect.left + rect.width / 2;
let y = rect.top + rect.height / 2;
@@ -224,15 +224,15 @@ var MultimodalWebSurfer = MultimodalWebSurfer || (function() {
});
}
}
-
+
if (record["rects"].length > 0) {
results[key] = record;
}
}
-
+
return results;
- };
-
+ };
+
let getVisualViewport = function() {
let vv = window.visualViewport;
let de = document.documentElement;
@@ -250,7 +250,7 @@ var MultimodalWebSurfer = MultimodalWebSurfer || (function() {
"scrollHeight": de ? de.scrollHeight : 0
};
};
-
+
let _getMetaTags = function() {
let meta = document.querySelectorAll("meta");
let results = {};
@@ -271,7 +271,7 @@ var MultimodalWebSurfer = MultimodalWebSurfer || (function() {
}
return results;
};
-
+
let _getJsonLd = function() {
let jsonld = [];
let scripts = document.querySelectorAll('script[type="application/ld+json"]');
@@ -280,13 +280,13 @@ var MultimodalWebSurfer = MultimodalWebSurfer || (function() {
}
return jsonld;
};
-
+
// From: https://www.stevefenton.co.uk/blog/2022/12/parse-microdata-with-javascript/
let _getMicrodata = function() {
function sanitize(input) {
return input.replace(/\s/gi, ' ').trim();
}
-
+
function addValue(information, name, value) {
if (information[name]) {
if (typeof information[name] === 'array') {
@@ -301,29 +301,29 @@ var MultimodalWebSurfer = MultimodalWebSurfer || (function() {
information[name] = value;
}
}
-
+
function traverseItem(item, information) {
const children = item.children;
-
+
for (let i = 0; i < children.length; i++) {
const child = children[i];
-
+
if (child.hasAttribute('itemscope')) {
if (child.hasAttribute('itemprop')) {
const itemProp = child.getAttribute('itemprop');
const itemType = child.getAttribute('itemtype');
-
+
const childInfo = {
itemType: itemType
};
-
+
traverseItem(child, childInfo);
-
+
itemProp.split(' ').forEach(propName => {
addValue(information, propName, childInfo);
});
}
-
+
} else if (child.hasAttribute('itemprop')) {
const itemProp = child.getAttribute('itemprop');
itemProp.split(' ').forEach(propName => {
@@ -339,9 +339,9 @@ var MultimodalWebSurfer = MultimodalWebSurfer || (function() {
}
}
}
-
+
const microdata = [];
-
+
document.querySelectorAll("[itemscope]").forEach(function(elem, i) {
const itemType = elem.getAttribute('itemtype');
const information = {
@@ -350,10 +350,10 @@ var MultimodalWebSurfer = MultimodalWebSurfer || (function() {
traverseItem(elem, information);
microdata.push(information);
});
-
+
return microdata;
};
-
+
let getPageMetadata = function() {
let jsonld = _getJsonLd();
let metaTags = _getMetaTags();
@@ -362,7 +362,7 @@ var MultimodalWebSurfer = MultimodalWebSurfer || (function() {
if (jsonld.length > 0) {
try {
results["jsonld"] = JSON.parse(jsonld);
- }
+ }
catch (e) {
results["jsonld"] = jsonld;
}
@@ -377,8 +377,8 @@ var MultimodalWebSurfer = MultimodalWebSurfer || (function() {
}
}
return results;
- };
-
+ };
+
return {
getInteractiveRects: getInteractiveRects,
getVisualViewport: getVisualViewport,
diff --git a/camel/toolkits/search_toolkit.py b/camel/toolkits/search_toolkit.py
index 1538fbb194..728b5f8366 100644
--- a/camel/toolkits/search_toolkit.py
+++ b/camel/toolkits/search_toolkit.py
@@ -199,9 +199,7 @@ def search_duckduckgo(
if source == "text":
try:
- results = ddgs.text(
- query, max_results=number_of_result_pages
- )
+ results = ddgs.text(query, max_results=number_of_result_pages)
# Iterate over results found
for i, result in enumerate(results, start=1):
# Creating a response object with a similar structure
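The hunk above compacts the `ddgs.text(...)` call that feeds the "response object" loop. As a hedged sketch of that shaping step (the output field names are illustrative; the input keys `title`/`href`/`body` are the ones the duckduckgo_search library documents for text results):

```python
def format_text_results(results):
    """Shape raw DuckDuckGo text results into a uniform list of dicts.

    Assumes each raw result carries the "title", "href", and "body"
    keys returned by duckduckgo_search for text search.
    """
    responses = []
    for i, result in enumerate(results, start=1):
        responses.append({
            "result_id": i,
            "title": result.get("title", ""),
            "description": result.get("body", ""),
            "url": result.get("href", ""),
        })
    return responses

sample = [{"title": "CAMEL", "body": "Agent framework",
           "href": "https://camel-ai.org"}]
shaped = format_text_results(sample)
```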
diff --git a/camel/toolkits/video_analysis_toolkit.py b/camel/toolkits/video_analysis_toolkit.py
index 0769ca2794..ccb23f22e4 100644
--- a/camel/toolkits/video_analysis_toolkit.py
+++ b/camel/toolkits/video_analysis_toolkit.py
@@ -77,7 +77,7 @@
5. Important Considerations:
- Pay close attention to subtle differences that could distinguish \
-similar-looking species or objects
+similar-looking species or objects
(e.g., juveniles vs. adults, closely related species).
- Provide concise yet complete explanations to ensure clarity.
diff --git a/data/ai_society/assistant_roles.txt b/data/ai_society/assistant_roles.txt
index a03be3fa45..85c4d401a2 100644
--- a/data/ai_society/assistant_roles.txt
+++ b/data/ai_society/assistant_roles.txt
@@ -47,4 +47,4 @@
47. Virtual Assistant
48. Web Developer
49. Writer
-50. Zoologist
\ No newline at end of file
+50. Zoologist
diff --git a/data/ai_society/user_roles.txt b/data/ai_society/user_roles.txt
index 2ad6f9274d..93c4bafc32 100644
--- a/data/ai_society/user_roles.txt
+++ b/data/ai_society/user_roles.txt
@@ -47,4 +47,4 @@
47. Writer
48. Yoga instructor
49. YouTuber
-50. Zoologist
\ No newline at end of file
+50. Zoologist
diff --git a/data/code/domains.txt b/data/code/domains.txt
index 13adbbcfa2..7ac1271381 100644
--- a/data/code/domains.txt
+++ b/data/code/domains.txt
@@ -47,4 +47,4 @@
47. Sports Science
48. Statistics
49. Theater
-50. Urban Planning
\ No newline at end of file
+50. Urban Planning
diff --git a/data/code/languages.txt b/data/code/languages.txt
index af954c1303..71e42eaacf 100644
--- a/data/code/languages.txt
+++ b/data/code/languages.txt
@@ -17,4 +17,4 @@
17. Shell
18. Visual Basic
19. Assembly
-20. Dart
\ No newline at end of file
+20. Dart
diff --git a/docs/cookbooks/advanced_features/agents_with_tools_from_ACI.ipynb b/docs/cookbooks/advanced_features/agents_with_tools_from_ACI.ipynb
index 903eb55327..389f49960a 100644
--- a/docs/cookbooks/advanced_features/agents_with_tools_from_ACI.ipynb
+++ b/docs/cookbooks/advanced_features/agents_with_tools_from_ACI.ipynb
@@ -266,4 +266,4 @@
},
"nbformat": 4,
"nbformat_minor": 0
-}
\ No newline at end of file
+}
diff --git a/docs/cookbooks/applications/index.rst b/docs/cookbooks/applications/index.rst
index 78afe52f0b..fdc1991cfa 100644
--- a/docs/cookbooks/applications/index.rst
+++ b/docs/cookbooks/applications/index.rst
@@ -17,4 +17,4 @@ Applications
customer_service_Discord_bot_using_SambaNova_with_agentic_RAG
customer_service_Discord_bot_using_local_model_with_agentic_RAG
finance_discord_bot
- pptx_toolkit
\ No newline at end of file
+ pptx_toolkit
diff --git a/docs/cookbooks/data_generation/self_improving_cot_generation.md b/docs/cookbooks/data_generation/self_improving_cot_generation.md
index 7d2a4274eb..654d97394c 100644
--- a/docs/cookbooks/data_generation/self_improving_cot_generation.md
+++ b/docs/cookbooks/data_generation/self_improving_cot_generation.md
@@ -12,7 +12,7 @@ CAMEL developed an approach leverages iterative refinement, self-assessment, and
## 1. Overview of the End-to-End Pipeline 🔍
-### 1.1 Why an Iterative CoT Pipeline?
+### 1.1 Why an Iterative CoT Pipeline?
One-time CoT generation often leads to incomplete or suboptimal solutions. CAMEL addresses this challenge by employing a multi-step, iterative approach:
@@ -22,7 +22,7 @@ One-time CoT generation often leads to incomplete or suboptimal solutions. CAMEL
This self-improving methodology ensures that the reasoning process improves progressively, meeting specific thresholds for correctness, clarity, and completeness. Each iteration enhances the model's ability to solve the problem by learning from the previous outputs and evaluations.
-### 1.2 Core Components
+### 1.2 Core Components
The self-improving pipeline consists of three key components:
1. **`reason_agent`:** This agent is responsible for generating or improving reasoning traces.
@@ -57,7 +57,7 @@ Once the reasoning trace is generated, it is evaluated for its quality. This eva
- **Detecting weaknesses**: The evaluation identifies areas where the reasoning trace could be further improved.
- **Providing feedback**: The evaluation produces feedback that guides the agent in refining the reasoning trace. This feedback can come from either the **`evaluate_agent`** or a **`reward_model`**.
-#### 2.2.1 Agent-Based Evaluation
+#### 2.2.1 Agent-Based Evaluation
If an **`evaluate_agent`** is available, it examines the reasoning trace for:
1. **Correctness**: Does the trace logically solve the problem?
@@ -66,7 +66,7 @@ If an **`evaluate_agent`** is available, it examines the reasoning trace for:
The feedback from the agent provides insights into areas for improvement, such as unclear reasoning or incorrect answers, offering a more generalized approach compared to rule-based matching.
-#### 2.2.2 Reward Model Evaluation
+#### 2.2.2 Reward Model Evaluation
Alternatively, the pipeline supports using a **reward model** to evaluate the trace. The reward model outputs scores based on predefined dimensions such as correctness, coherence, complexity, and verbosity.
@@ -79,7 +79,7 @@ The key to CAMEL's success in CoT generation is its **self-improving loop**. Aft
#### How does this iterative refinement work?
1. **Feedback Integration**: The feedback from the evaluation phase is used to refine the reasoning. This could involve rewording unclear parts, adding missing steps, or adjusting the logic to make it more correct or complete.
-
+
2. **Improvement through Reasoning**: After receiving feedback, the **`reason_agent`** is used again to generate an improved version of the reasoning trace. This trace incorporates the feedback provided, refining the earlier steps and enhancing the overall reasoning.
3. **Re-evaluation**: Once the trace is improved, the new version is evaluated again using the same process (either agent-based evaluation or reward model). This new trace is assessed against the same criteria to ensure the improvements have been made.
@@ -159,7 +159,7 @@ from camel.datagen import SelfImprovingCoTPipeline
# Initialize agents
reason_agent = ChatAgent(
- """Answer my question and give your
+ """Answer my question and give your
final answer within \\boxed{}."""
)
@@ -341,4 +341,4 @@ _Stay tuned for more updates on CAMEL's journey in advancing agentic synthetic d
- [Self-Improving Math Reasoning Data Distillation](https://docs.camel-ai.org/cookbooks/data_generation/self_improving_math_reasoning_data_distillation_from_deepSeek_r1.html)
- [Generating High-Quality SFT Data with CAMEL](https://docs.camel-ai.org/cookbooks/data_generation/sft_data_generation_and_unsloth_finetuning_Qwen2_5_7B.html)
- [Function Call Data Generation and Evaluation](https://docs.camel-ai.org/cookbooks/data_generation/data_gen_with_real_function_calls_and_hermes_format.html)
-- [Agentic Data Generation, Evaluation & Filtering with Reward Models](https://docs.camel-ai.org/cookbooks/data_generation/synthetic_dataevaluation%26filter_with_reward_model.html)
\ No newline at end of file
+- [Agentic Data Generation, Evaluation & Filtering with Reward Models](https://docs.camel-ai.org/cookbooks/data_generation/synthetic_dataevaluation%26filter_with_reward_model.html)
diff --git a/docs/cookbooks/data_processing/summarisation_agent_with_mistral_ocr.md b/docs/cookbooks/data_processing/summarisation_agent_with_mistral_ocr.md
index c39379bd58..cbe9022478 100644
--- a/docs/cookbooks/data_processing/summarisation_agent_with_mistral_ocr.md
+++ b/docs/cookbooks/data_processing/summarisation_agent_with_mistral_ocr.md
@@ -24,7 +24,7 @@ In this cookbook, we’ll explore [**Mistral OCR**](https://mistral.ai/news/mist
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
@@ -40,19 +40,19 @@ Throughout history, advancements in information abstraction and retrieval have d
#### **Key Features of Mistral OCR:**
-1. **State-of-the-art complex document understanding**
+1. **State-of-the-art complex document understanding**
- Extracts interleaved text, figures, tables, and mathematical expressions with high fidelity.
-2. **Natively multilingual & multimodal**
+2. **Natively multilingual & multimodal**
- Parses scripts and fonts from across the globe, handling right-to-left layouts and non-Latin characters seamlessly.
-3. **Doc-as-prompt, structured output**
+3. **Doc-as-prompt, structured output**
- Returns ordered Markdown, embedding images and bounding-box metadata ready for RAG and downstream AI workflows.
-4. **Top-tier benchmarks & speed**
+4. **Top-tier benchmarks & speed**
- Outperforms leading OCR systems in accuracy—especially in math, tables, and multilingual tests—while delivering fast batch inference (∼2000 pages/min).
-5. **Scalable & flexible deployment**
+5. **Scalable & flexible deployment**
- Available via `mistral-ocr-latest` on Mistral’s developer suite, cloud partners, and on-premises self-hosting for sensitive data.
Ready to unlock your documents? Let’s dive into the extraction guide.
@@ -71,10 +71,10 @@ First, install the CAMEL package with all its dependencies.
If you don’t have a Mistral API key, you can obtain one by following these steps:
-1. **Create an account:**
+1. **Create an account:**
Go to [Mistral Console](https://console.mistral.ai/home) and sign up for an organization account.
-2. **Get your API key:**
+2. **Get your API key:**
Once logged in, navigate to **Organization** → **API Keys**, generate a new key, copy it, and store it securely.
@@ -210,8 +210,6 @@ Thanks from everyone at 🐫 CAMEL-AI
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
-
-
diff --git a/docs/cookbooks/mcp/agent_to_mcp_with_faiss.ipynb b/docs/cookbooks/mcp/agent_to_mcp_with_faiss.ipynb
index 81c40ed971..221f569e82 100644
--- a/docs/cookbooks/mcp/agent_to_mcp_with_faiss.ipynb
+++ b/docs/cookbooks/mcp/agent_to_mcp_with_faiss.ipynb
@@ -749,4 +749,4 @@
}
}
]
-}
\ No newline at end of file
+}
diff --git a/docs/cookbooks/multi_agent_society/azure_openai_claude_society.md b/docs/cookbooks/multi_agent_society/azure_openai_claude_society.md
index 7fdd87c6bd..7581305687 100644
--- a/docs/cookbooks/multi_agent_society/azure_openai_claude_society.md
+++ b/docs/cookbooks/multi_agent_society/azure_openai_claude_society.md
@@ -11,7 +11,7 @@ title: "🍳 CAMEL Cookbook: Building a Collaborative AI Research Society"
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
@@ -635,9 +635,8 @@ Thanks from everyone at 🐫 CAMEL-AI
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
---
-
diff --git a/docs/get_started/installation.md b/docs/get_started/installation.md
index a66a6115ac..5ff752bf51 100644
--- a/docs/get_started/installation.md
+++ b/docs/get_started/installation.md
@@ -104,7 +104,7 @@ We recommend starting with a simple role-playing scenario to understand CAMEL's
```bash
pip install -r requirements.txt
```
-
+
- Set up your environment variables by loading the `.env` file:
```python
from dotenv import load_dotenv
@@ -116,7 +116,7 @@ We recommend starting with a simple role-playing scenario to understand CAMEL's
python examples/role_playing.py
```
-
+
Want to see multi-agent collaboration at scale?
Try running the workforce example:
@@ -235,10 +235,10 @@ python examples/ai_society/role_playing.py
python examples/toolkits/code_execution_toolkit.py
# Generating knowledge graphs with agents
-python examples/knowledge_graph/knowledge_graph_agent_example.py
+python examples/knowledge_graph/knowledge_graph_agent_example.py
# Multiple agents collaborating on complex tasks
-python examples/workforce/multiple_single_agents.py
+python examples/workforce/multiple_single_agents.py
# Creative image generation with agents
python examples/vision/image_crafting.py
diff --git a/docs/get_started/introduction.md b/docs/get_started/introduction.md
index 87aebf2a90..bf4e0b7d6c 100644
--- a/docs/get_started/introduction.md
+++ b/docs/get_started/introduction.md
@@ -44,7 +44,7 @@ description: |
- OWL (Optimized Workforce Learning) is a multi-agent automation framework for real-world tasks. Built on CAMEL-AI,
+ OWL (Optimized Workforce Learning) is a multi-agent automation framework for real-world tasks. Built on CAMEL-AI,
it enables dynamic agent collaboration using tools like browsers, code interpreters, and multimodal models.
diff --git a/docs/key_modules/agents.md b/docs/key_modules/agents.md
index 008849af23..9561ed8d56 100644
--- a/docs/key_modules/agents.md
+++ b/docs/key_modules/agents.md
@@ -7,7 +7,7 @@ icon: user-helmet-safety
## Concept
-Agents in CAMEL are autonomous entities capable of performing specific tasks through interaction with language models and other components.
+Agents in CAMEL are autonomous entities capable of performing specific tasks through interaction with language models and other components.
Each agent is designed with a particular role and capability, allowing them to work independently or collaboratively to achieve complex goals.
@@ -38,26 +38,26 @@ The `ChatAgent` is the primary implementation that handles conversations with la
-
- **`CriticAgent`**
+
+ **`CriticAgent`**
Specialized agent for evaluating and critiquing responses or solutions. Used in scenarios requiring quality assessment or validation.
- **`DeductiveReasonerAgent`**
+ **`DeductiveReasonerAgent`**
Focused on logical reasoning and deduction. Breaks down complex problems into smaller, manageable steps.
- **`EmbodiedAgent`**
+ **`EmbodiedAgent`**
Designed for embodied AI scenarios, capable of understanding and responding to physical world contexts.
- **`KnowledgeGraphAgent`**
+ **`KnowledgeGraphAgent`**
Specialized in building and utilizing knowledge graphs for enhanced reasoning and information management.
- **`MultiHopGeneratorAgent`**
+ **`MultiHopGeneratorAgent`**
Handles multi-hop reasoning tasks, generating intermediate steps to reach conclusions.
- **`SearchAgent`**
+ **`SearchAgent`**
Focused on information retrieval and search tasks across various data sources.
- **`TaskAgent`**
+ **`TaskAgent`**
Handles task decomposition and management, breaking down complex tasks into manageable subtasks.
diff --git a/docs/key_modules/browsertoolkit.md b/docs/key_modules/browsertoolkit.md
index 29a4afc889..fdd8876fb5 100644
--- a/docs/key_modules/browsertoolkit.md
+++ b/docs/key_modules/browsertoolkit.md
@@ -136,4 +136,4 @@ answer = browser_toolkit.browser.ask_question_about_video(question=question)
print(answer)
```
-
\ No newline at end of file
+
diff --git a/docs/key_modules/datagen.md b/docs/key_modules/datagen.md
index 45247ad6d3..ed726f7dea 100644
--- a/docs/key_modules/datagen.md
+++ b/docs/key_modules/datagen.md
@@ -27,7 +27,7 @@ This page introduces CAMEL's **data generation modules** for creating high-quali
-**CoTDataGenerator Class**
+**CoTDataGenerator Class**
The main class that implements the CoT generation system with the following capabilities:
@@ -405,7 +405,7 @@ The main class that implements the CoT generation system with the following capa
# Initialize agents
reason_agent = ChatAgent(
- """Answer my question and give your
+ """Answer my question and give your
final answer within \\boxed{}."""
)
diff --git a/docs/key_modules/embeddings.md b/docs/key_modules/embeddings.md
index 27f5094f7f..6a3927f027 100644
--- a/docs/key_modules/embeddings.md
+++ b/docs/key_modules/embeddings.md
@@ -11,8 +11,8 @@ icon: vector-square
-Text embeddings turn sentences or documents into high-dimensional vectors that capture meaning.
-Example:
+Text embeddings turn sentences or documents into high-dimensional vectors that capture meaning.
+Example:
“A young boy is playing soccer in a park.”
“A child is kicking a football on a playground.”
diff --git a/docs/key_modules/loaders.md b/docs/key_modules/loaders.md
index fd06347a01..628d040ac0 100644
--- a/docs/key_modules/loaders.md
+++ b/docs/key_modules/loaders.md
@@ -289,7 +289,7 @@ That’s it. With just a couple of lines, you can turn any website into clean ma
---
-Chunkr Reader allows you to process PDFs (and other docs) in chunks, with built-in OCR and format control.
+Chunkr Reader allows you to process PDFs (and other docs) in chunks, with built-in OCR and format control.
Below is a basic usage pattern:
Initialize the `ChunkrReader` and `ChunkrReaderConfig`, set the file path and chunking options, then submit your task and fetch results:
diff --git a/docs/key_modules/memory.md b/docs/key_modules/memory.md
index ab473303c7..91e941da10 100644
--- a/docs/key_modules/memory.md
+++ b/docs/key_modules/memory.md
@@ -126,7 +126,7 @@ icon: memory
- **What it is:**
+ **What it is:**
The basic data unit in CAMEL’s memory system—everything stored/retrieved flows through this structure.
**Attributes:**
@@ -142,7 +142,7 @@ icon: memory
- **What it is:**
+ **What it is:**
Result of memory retrieval from `AgentMemory`, scored for context relevance.
**Attributes:**
@@ -151,7 +151,7 @@ icon: memory
- **What it is:**
+ **What it is:**
The core “building block” for agent memory, following the Composite design pattern (supports tree structures).
**Key methods:**
@@ -161,7 +161,7 @@ icon: memory
- **What it is:**
+ **What it is:**
Defines strategies for generating agent context when data exceeds model limits.
**Key methods/properties:**
@@ -171,7 +171,7 @@ icon: memory
- **What it is:**
+ **What it is:**
Specialized `MemoryBlock` for direct agent use.
**Key methods:**
@@ -188,7 +188,7 @@ icon: memory
- **What it does:**
+ **What it does:**
Stores and retrieves recent chat history (like a conversation timeline).
**Initialization:**
@@ -200,12 +200,12 @@ icon: memory
- `write_records()`: Add new records
- `clear()`: Remove all chat history
- **Use Case:**
+ **Use Case:**
Best for maintaining the most recent conversation flow/context.
- **What it does:**
+ **What it does:**
Uses vector embeddings for storing and retrieving information based on semantic similarity.
**Initialization:**
@@ -217,7 +217,7 @@ icon: memory
- `write_records()`: Add new records (converted to vectors)
- `clear()`: Remove all vector records
- **Use Case:**
+ **Use Case:**
Ideal for large histories or when semantic search is needed.
@@ -234,8 +234,8 @@ icon: memory
-**What is it?**
-An **AgentMemory** implementation that wraps `ChatHistoryBlock`.
+**What is it?**
+An **AgentMemory** implementation that wraps `ChatHistoryBlock`.
**Best for:** Sequential, recent chat context (simple conversation memory).
**Initialization:**
@@ -251,8 +251,8 @@ An **AgentMemory** implementation that wraps `ChatHistoryBlock`.
-**What is it?**
-An **AgentMemory** implementation that wraps `VectorDBBlock`.
+**What is it?**
+An **AgentMemory** implementation that wraps `VectorDBBlock`.
**Best for:** Semantic search—find relevant messages by meaning, not just recency.
**Initialization:**
@@ -267,8 +267,8 @@ An **AgentMemory** implementation that wraps `VectorDBBlock`.
-**What is it?**
-Combines **ChatHistoryMemory** and **VectorDBMemory** for hybrid memory.
+**What is it?**
+Combines **ChatHistoryMemory** and **VectorDBMemory** for hybrid memory.
**Best for:** Production bots that need both recency & semantic search.
**Initialization:**
@@ -348,7 +348,7 @@ You can subclass `BaseContextCreator` for advanced control.
@property
def token_counter(self):
# Implement your token counting logic
- return
+ return
@property
def token_limit(self):
diff --git a/docs/key_modules/models.md b/docs/key_modules/models.md
index 0cd751b4f3..7611640b5f 100644
--- a/docs/key_modules/models.md
+++ b/docs/key_modules/models.md
@@ -91,7 +91,7 @@ Integrate your favorite models into CAMEL-AI with straightforward Python calls.
-
+
Here's how you use OpenAI models such as GPT-4o-mini with CAMEL:
```python
@@ -118,7 +118,7 @@ Integrate your favorite models into CAMEL-AI with straightforward Python calls.
-
+
Using Google's Gemini models in CAMEL:
- **Google AI Studio** ([Quick Start](https://aistudio.google.com/)): Try models quickly in a no-code environment.
@@ -149,7 +149,7 @@ Integrate your favorite models into CAMEL-AI with straightforward Python calls.
-
+
Integrate Mistral AI models like Mistral Medium into CAMEL:
```python
@@ -176,7 +176,7 @@ Integrate your favorite models into CAMEL-AI with straightforward Python calls.
-
+
Leveraging Anthropic's Claude models within CAMEL:
```python
@@ -203,10 +203,10 @@ Integrate your favorite models into CAMEL-AI with straightforward Python calls.
-
+
Leverage [CometAPI](https://api.cometapi.com/)'s unified access to multiple frontier AI models:
- - **CometAPI Platform** ([CometAPI](https://www.cometapi.com/?utm_source=camel-ai&utm_campaign=integration&utm_medium=integration&utm_content=integration)):
+ - **CometAPI Platform** ([CometAPI](https://www.cometapi.com/?utm_source=camel-ai&utm_campaign=integration&utm_medium=integration&utm_content=integration)):
- **API Key Setup**: Obtain your CometAPI key to start integration.
- **OpenAI Compatible**: Use familiar OpenAI API patterns with advanced frontier models.
@@ -265,7 +265,7 @@ Integrate your favorite models into CAMEL-AI with straightforward Python calls.
ModelType.COMETAPI_QWEN3_30B_A3B,
ModelType.COMETAPI_QWEN3_CODER_PLUS_2025_07_22
]
-
+
for model_type in models_to_try:
model = ModelFactory.create(
model_platform=ModelPlatformType.COMETAPI,
@@ -277,7 +277,7 @@ Integrate your favorite models into CAMEL-AI with straightforward Python calls.
-
+
Leverage [Nebius AI Studio](https://nebius.com/)'s high-performance GPU cloud with OpenAI-compatible models:
- **Nebius AI Studio** ([Platform](https://studio.nebius.com/)): Access powerful models through their cloud infrastructure.
@@ -319,7 +319,7 @@ Integrate your favorite models into CAMEL-AI with straightforward Python calls.
- **Complete Access:** All models available on [Nebius AI Studio](https://studio.nebius.com/) are supported
- **Predefined Enums:** Common models like `NEBIUS_GPT_OSS_120B`, `NEBIUS_DEEPSEEK_V3`, etc.
- **String-based Access:** Use any model name directly as a string for maximum flexibility
-
+
**Example with any model:**
```python
# Use any model available on Nebius
@@ -393,7 +393,7 @@ Integrate your favorite models into CAMEL-AI with straightforward Python calls.
- `OPENROUTER_LLAMA_4_SCOUT` - Meta's Llama 4 Scout model
- `OPENROUTER_OLYMPICODER_7B` - Open R1's OlympicCoder 7B model
- `OPENROUTER_HORIZON_ALPHA` - Horizon Alpha model
-
+
Free versions are also available for some models (e.g., `OPENROUTER_LLAMA_4_MAVERICK_FREE`).
@@ -427,7 +427,7 @@ Integrate your favorite models into CAMEL-AI with straightforward Python calls.
-
+
Using [Groq](https://groq.com/)'s powerful models (e.g., Llama 3.3-70B):
```python
diff --git a/docs/key_modules/prompts.md b/docs/key_modules/prompts.md
index 12eb807a53..4b1dd9ccf7 100644
--- a/docs/key_modules/prompts.md
+++ b/docs/key_modules/prompts.md
@@ -186,7 +186,7 @@ prompt2 = TextPrompt('Welcome, {name}!')
# Concatenation
prompt3 = prompt1 + ' ' + prompt2
-print(prompt3)
+print(prompt3)
# >>> "Hello, {name}! Welcome, {name}!"
print(isinstance(prompt3, TextPrompt)) # >>> True
print(prompt3.key_words) # >>> {'name'}
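The key-word tracking shown in this hunk can be illustrated with plain Python. The sketch below is not CAMEL's actual `TextPrompt` implementation; it only demonstrates, via the standard-library `string.Formatter`, how named placeholders can be collected from a format template so that duplicates collapse into a single key word:

```python
from string import Formatter

def key_words(template: str) -> set:
    """Collect the named placeholders in a str.format-style template."""
    return {field for _, field, _, _ in Formatter().parse(template) if field}

p1 = "Hello, {name}!"
p2 = "Welcome, {name}!"
p3 = p1 + " " + p2
print(key_words(p3))  # {'name'} -- the repeated placeholder yields one key word
```

This mirrors why `prompt3.key_words` above is `{'name'}` even though `{name}` appears twice in the concatenated prompt.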
@@ -298,5 +298,3 @@ print(prompt5.key_words) # >>> {'NAME'}
-
-
diff --git a/docs/key_modules/retrievers.md b/docs/key_modules/retrievers.md
index 3bf60812a6..133185baa8 100644
--- a/docs/key_modules/retrievers.md
+++ b/docs/key_modules/retrievers.md
@@ -109,8 +109,8 @@ Use AutoRetriever for fast experiments and RAG workflows; for advanced control,
-For simple, blazing-fast search by keyword—use KeywordRetriever.
-Great for small data, transparency, or keyword-driven tasks.
+For simple, blazing-fast search by keyword—use KeywordRetriever.
+Great for small data, transparency, or keyword-driven tasks.
*(API and code example coming soon—see RAG Cookbook for details.)*
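Until the official API and code example land, the idea behind a keyword retriever can be approximated in a few lines of standard-library Python. This is a hypothetical sketch using simple term-overlap scoring, not CAMEL's `KeywordRetriever`:

```python
import re
from collections import Counter

def keyword_retrieve(query: str, docs: list, top_k: int = 2) -> list:
    """Rank documents by how many query terms each one contains."""
    terms = set(re.findall(r"\w+", query.lower()))
    scored = []
    for doc in docs:
        counts = Counter(re.findall(r"\w+", doc.lower()))
        scored.append((sum(counts[t] for t in terms), doc))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    # Keep only documents that matched at least one query term.
    return [doc for score, doc in scored[:top_k] if score > 0]

docs = ["CAMEL supports keyword retrieval.", "Vector search uses embeddings."]
print(keyword_retrieve("keyword retrieval", docs))
```

The transparency advantage mentioned above follows directly: every ranking decision is a visible term count, not an opaque embedding distance.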
@@ -130,4 +130,3 @@ Great for small data, transparency, or keyword-driven tasks.
Full configuration and options for all retriever classes.
-
diff --git a/docs/key_modules/runtimes.md b/docs/key_modules/runtimes.md
index 718002b362..977fb3ef0a 100644
--- a/docs/key_modules/runtimes.md
+++ b/docs/key_modules/runtimes.md
@@ -122,11 +122,10 @@ All runtimes inherit from BaseRuntime, which defines core methods:
## More Examples
-You’ll find runnable scripts for each runtime in [examples/runtime](https://github.com/camel-ai/camel/tree/master/examples/runtimes)/ in our main repo.
+You’ll find runnable scripts for each runtime in [examples/runtime](https://github.com/camel-ai/camel/tree/master/examples/runtimes)/ in our main repo.
Each script demonstrates how to initialize and use a specific runtime—perfect for experimentation or production setups.
## Final Note
-The runtime system primarily sandboxes FunctionTool-style tool functions.
+The runtime system primarily sandboxes FunctionTool-style tool functions.
For agent-level, dynamic code execution, always consider dedicated sandboxing—such as UbuntuDockerRuntime’s exec_python_file()—for running dynamically generated scripts with maximum isolation and safety.
-
diff --git a/docs/key_modules/storages.md b/docs/key_modules/storages.md
index 2a6a1cf6e0..9c374f95f5 100644
--- a/docs/key_modules/storages.md
+++ b/docs/key_modules/storages.md
@@ -76,8 +76,8 @@ The Storage module in CAMEL-AI gives you a **unified interface for saving
**BaseGraphStorage**
- Abstract base for graph database integrations
- - **Supports:**
- - Schema queries and refresh
+ - **Supports:**
+ - Schema queries and refresh
- Adding/deleting/querying triplets
**NebulaGraph**
@@ -99,7 +99,7 @@ Here are practical usage patterns for each storage type—pick the ones you need
---
- Use for: Fast, temporary storage. Data is lost when your program exits.
+ Use for: Fast, temporary storage. Data is lost when your program exits.
Perfect for: Prototyping, testing, in-memory caching.
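The in-memory pattern described here can be sketched in a few lines; this is an illustrative stand-in only, not CAMEL's `InMemoryKeyValueStorage` API, showing why such storage is fast but ephemeral:

```python
class InMemoryStore:
    """Tiny list-backed record store; contents vanish when the process exits."""

    def __init__(self):
        self._records = []

    def save(self, records):
        self._records.extend(records)

    def load(self):
        # Return a copy so callers cannot mutate internal state.
        return list(self._records)

    def clear(self):
        self._records = []

store = InMemoryStore()
store.save([{"role": "user", "content": "hi"}])
print(store.load())  # [{'role': 'user', 'content': 'hi'}]
```

Because everything lives in a Python object, there is no I/O cost, which is exactly what makes this mode suitable for prototyping and test runs.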
@@ -125,7 +125,7 @@ Here are practical usage patterns for each storage type—pick the ones you need
---
- Use for: Persistent, human-readable storage on disk.
+ Use for: Persistent, human-readable storage on disk.
Perfect for: Logs, local settings, configs, or sharing small data sets.
@@ -152,7 +152,7 @@ Here are practical usage patterns for each storage type—pick the ones you need
---
- Use for: Scalable, high-performance vector search (RAG, embeddings).
+ Use for: Scalable, high-performance vector search (RAG, embeddings).
Perfect for: Semantic search and production AI retrieval.
@@ -187,7 +187,7 @@ Here are practical usage patterns for each storage type—pick the ones you need
---
- Use for: Hybrid cloud-native storage, vectors + SQL in one.
+ Use for: Hybrid cloud-native storage, vectors + SQL in one.
Perfect for: Combining AI retrieval with your business database.
@@ -223,7 +223,7 @@ Here are practical usage patterns for each storage type—pick the ones you need
- Use for: Fast, scalable open-source vector search.
+ Use for: Fast, scalable open-source vector search.
Perfect for: RAG, document search, and high-scale retrieval tasks.
@@ -260,7 +260,7 @@ Here are practical usage patterns for each storage type—pick the ones you need
---
- Use for: Fastest way to build LLM apps with memory and embeddings.
+ Use for: Fastest way to build LLM apps with memory and embeddings.
Perfect for: From prototyping in notebooks to production clusters with the same simple API.
@@ -353,7 +353,7 @@ Here are practical usage patterns for each storage type—pick the ones you need
---
- Use for: Massive vector storage with advanced analytics.
+ Use for: Massive vector storage with advanced analytics.
Perfect for: Batch operations, cloud or on-prem setups, and high-throughput search.
@@ -449,7 +449,7 @@ Here are practical usage patterns for each storage type—pick the ones you need
---
- Use for: Vector search with hybrid (vector + keyword) capabilities.
+ Use for: Vector search with hybrid (vector + keyword) capabilities.
Perfect for: Document retrieval and multimodal AI apps.
@@ -491,7 +491,7 @@ Here are practical usage patterns for each storage type—pick the ones you need
---
- Use for: Open-source, distributed graph storage and querying.
+ Use for: Open-source, distributed graph storage and querying.
Perfect for: Knowledge graphs, relationships, and fast distributed queries.
@@ -510,7 +510,7 @@ Here are practical usage patterns for each storage type—pick the ones you need
---
- Use for: Industry-standard graph database for large-scale relationships.
+ Use for: Industry-standard graph database for large-scale relationships.
Perfect for: Enterprise graph workloads, Cypher queries, analytics.
```python
diff --git a/docs/key_modules/tasks.md b/docs/key_modules/tasks.md
index a1269632e8..7fc0aae818 100644
--- a/docs/key_modules/tasks.md
+++ b/docs/key_modules/tasks.md
@@ -6,7 +6,7 @@ icon: list-check
For more detailed usage information, please refer to our cookbook: [Task Generation Cookbook](../cookbooks/multi_agent_society/task_generation.ipynb)
-A task in CAMEL is a structured assignment that can be given to one or more agents. Tasks are higher-level than prompts and managed by modules like the Planner and Workforce.
+A task in CAMEL is a structured assignment that can be given to one or more agents. Tasks are higher-level than prompts and managed by modules like the Planner and Workforce.
Key ideas:
- Tasks can be collaborative, requiring multiple agents.
- Tasks can be decomposed into subtasks or evolved over time.
diff --git a/docs/key_modules/terminaltoolkit.md b/docs/key_modules/terminaltoolkit.md
index e451384d85..b0b52c94a9 100644
--- a/docs/key_modules/terminaltoolkit.md
+++ b/docs/key_modules/terminaltoolkit.md
@@ -149,4 +149,4 @@ help_result = terminal_toolkit.ask_user_for_help(id='session_1')
# in the console. After the user types '/exit', the script will resume.
print(help_result)
```
-
\ No newline at end of file
+
diff --git a/docs/key_modules/tools.md b/docs/key_modules/tools.md
index c3042e4882..8686776079 100644
--- a/docs/key_modules/tools.md
+++ b/docs/key_modules/tools.md
@@ -6,12 +6,12 @@ icon: screwdriver-wrench
For more detailed usage information, please refer to our cookbook: [Tools Cookbook](../cookbooks/advanced_features/agents_with_tools.ipynb)
- A Tool in CAMEL is a callable function with a name, description, input parameters, and an output type.
+ A Tool in CAMEL is a callable function with a name, description, input parameters, and an output type.
Tools act as the interface between agents and the outside world—think of them like OpenAI Functions you can easily convert, extend, or use directly.
- A Toolkit is a curated collection of related tools designed to work together for a specific purpose.
+ A Toolkit is a curated collection of related tools designed to work together for a specific purpose.
CAMEL provides a range of built-in toolkits—covering everything from web search and data extraction to code execution, GitHub integration, and much more.
diff --git a/docs/mcp/camel_agents_as_an_mcp_clients.md b/docs/mcp/camel_agents_as_an_mcp_clients.md
index b217c214a9..033e8a71b3 100644
--- a/docs/mcp/camel_agents_as_an_mcp_clients.md
+++ b/docs/mcp/camel_agents_as_an_mcp_clients.md
@@ -100,7 +100,7 @@ You can use sse or streamable-http for ACI.dev, pick w
- Once connected, you can extend your setup with other servers from ACI.dev, Composio, or `npx`.
+ Once connected, you can extend your setup with other servers from ACI.dev, Composio, or `npx`.
- Use `stdio` for local testing, `sse` or `streamable-http` for cloud tools.
@@ -122,7 +122,7 @@ This diagram illustrates how CAMEL agents use MCPToolkit to seamlessly connect w
-Want your MCP agent discoverable by thousands of clients?
+Want your MCP agent discoverable by thousands of clients?
Register it with a hub like ACI.dev or similar.
```python Register with ACI Registry lines icon="python"
from camel.agents import MCPAgent
@@ -148,11 +148,11 @@ Your agent is now connected to the AC
-Finding MCP servers is now a breeze with PulseMCP integration.
+Finding MCP servers is now a breeze with PulseMCP integration.
You don’t have to guess which MCP servers are available; just search, browse, and connect.
-PulseMCP acts as a living directory of the entire MCP ecosystem.
+PulseMCP acts as a living directory of the entire MCP ecosystem.
CAMEL toolkits can plug directly into PulseMCP, letting you browse and connect to thousands of servers, all kept up to date in real time.
You can visit [PulseMCP.com](https://pulsemcp.com) to browse all available MCP servers—everything from file systems and search to specialized APIs.
@@ -172,7 +172,7 @@ PulseMCP does the heavy lifting of finding, categorizing, and keeping MCP server
-Don’t need advanced tool-calling?
+Don’t need advanced tool-calling?
See this example for a super-lightweight setup.
diff --git a/docs/mcp/camel_toolkits_as_an_mcp_server.md b/docs/mcp/camel_toolkits_as_an_mcp_server.md
index 20f4b7f472..843731b660 100644
--- a/docs/mcp/camel_toolkits_as_an_mcp_server.md
+++ b/docs/mcp/camel_toolkits_as_an_mcp_server.md
@@ -13,7 +13,7 @@ description: "Share any CAMEL toolkit as an MCP server so external clients and a
- With one command, you can flip any toolkit into an MCP server.
+ With one command, you can flip any toolkit into an MCP server.
Now, any MCP-compatible client or agent can call your tools—locally or over the network.
@@ -25,14 +25,14 @@ description: "Share any CAMEL toolkit as an MCP server so external clients and a
You can turn any CAMEL toolkit into a full-featured MCP server—making its tools instantly available to other AI agents or external apps via the Model Context Protocol.
-Why do this?
+Why do this?
- Instantly share your agent tools with external clients (e.g., Claude, Cursor, custom dashboards).
- Enable distributed, language-agnostic tool execution across different systems and teams.
- Easily test, debug, and reuse your tools—no need to change the toolkit or agent code.
### Launch a Toolkit Server
-Below is a minimal script to expose ArxivToolkit as an MCP server.
+Below is a minimal script to expose ArxivToolkit as an MCP server.
Swap in any other toolkit (e.g., SearchToolkit, MathToolkit); they all work the same way!
```python
diff --git a/docs/mcp/connecting_existing_mcp_tools.md b/docs/mcp/connecting_existing_mcp_tools.md
index dbedbe9527..f6adcebcac 100644
--- a/docs/mcp/connecting_existing_mcp_tools.md
+++ b/docs/mcp/connecting_existing_mcp_tools.md
@@ -6,17 +6,17 @@ icon: 'network'
## Overview
-You can connect any Model Context Protocol (MCP) tool—like the official filesystem server—directly to your CAMEL ChatAgent.
+You can connect any Model Context Protocol (MCP) tool—like the official filesystem server—directly to your CAMEL ChatAgent.
This gives your agents natural language access to external filesystems, databases, or any MCP-compatible service.
-Use Case:
+Use Case:
Let your agent list files or read documents by wiring up the official MCP Filesystem server as a tool—no code changes to the agent required!
- You can use any MCP-compatible tool.
+ You can use any MCP-compatible tool.
For this example, we'll use the official filesystem server from the Model Context Protocol community.
Install globally using npm:
@@ -150,6 +150,6 @@ Let your agent list files or read documents by wiring up the official MCP Filesy
-That's it!
+That's it!
Your CAMEL agent can now leverage any external tool (filesystem, APIs, custom scripts) that supports MCP. Plug and play!
diff --git a/docs/mcp/export_camel_agent_as_mcp_server.md b/docs/mcp/export_camel_agent_as_mcp_server.md
index c64c7c22b8..7ea13a9fbe 100644
--- a/docs/mcp/export_camel_agent_as_mcp_server.md
+++ b/docs/mcp/export_camel_agent_as_mcp_server.md
@@ -14,8 +14,8 @@ Any MCP-compatible client (Claude, Cursor, editors, or your own app) can connect
-Scripted Server:
-Launch your agent as an MCP server with the ready-made scripts in services/.
+Scripted Server:
+Launch your agent as an MCP server with the ready-made scripts in services/.
Configure your MCP client (Claude, Cursor, etc.) to connect:
```json mcp_servers_config.json Example highlight={5}
{
@@ -71,7 +71,7 @@ if __name__ == "__main__":
## Real-world Example
-You can use Claude, Cursor, or any other app to call your custom agent!
+You can use Claude, Cursor, or any other app to call your custom agent!
Just connect to your CAMEL MCP server
@@ -94,6 +94,6 @@ You can expose any number of custom tools, multi-agent workflows, or domain know
---
-Want to create your own tools and toolkits?
+Want to create your own tools and toolkits?
See Toolkits Reference for everything you can expose to the MCP ecosystem!
diff --git a/docs/mcp/mcp_hub.md b/docs/mcp/mcp_hub.md
index 4cc77dfc6a..4077a53ea6 100644
--- a/docs/mcp/mcp_hub.md
+++ b/docs/mcp/mcp_hub.md
@@ -2,4 +2,4 @@
title: "CAMEL-AI MCPHub"
icon: warehouse
url: "https://mcp.camel-ai.org/"
----
\ No newline at end of file
+---
diff --git a/docs/mcp/overview.md b/docs/mcp/overview.md
index e2ebcb79b8..60e3ef12ff 100644
--- a/docs/mcp/overview.md
+++ b/docs/mcp/overview.md
@@ -8,7 +8,7 @@ icon: 'play'
MCP (Model Context Protocol) originated from an [Anthropic article](https://www.anthropic.com/news/model-context-protocol) published on November 25, 2024: *Introducing the Model Context Protocol*.
-MCP defines **how applications and AI models exchange contextual information**.
+MCP defines **how applications and AI models exchange contextual information**.
It enables developers to connect data sources, tools, and functions to LLMs using a universal, standardized protocol—much like USB-C enables diverse devices to connect via a single interface.
@@ -70,26 +70,26 @@ MCP follows a **client-server model** with three main roles:

### How it works, step by step:
-1. **User asks:**
+1. **User asks:**
“What documents do I have on my desktop?” via the Host (e.g., Claude Desktop).
-2. **Host (MCP Host):**
+2. **Host (MCP Host):**
Receives your question and forwards it to the Claude model.
-3. **Client (MCP Client):**
+3. **Client (MCP Client):**
The Claude model decides it needs more data, so the Client is activated to connect to a file system MCP Server.
-4. **Server (MCP Server):**
+4. **Server (MCP Server):**
The server reads your desktop directory and returns a list of documents.
-5. **Results:**
+5. **Results:**
Claude uses this info to answer your question, displayed in your desktop app.
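The five steps above can be sketched as a minimal call chain. These are stub classes for illustration only, not the real MCP wire protocol or any CAMEL class; the names are invented to mirror the Host, Client, and Server roles:

```python
class FileServer:
    """Stands in for an MCP Server that lists desktop documents."""
    def list_documents(self):
        return ["notes.txt", "report.pdf"]

class Client:
    """Stands in for an MCP Client holding one server connection."""
    def __init__(self, server):
        self.server = server
    def call(self, tool):
        return getattr(self.server, tool)()

class Host:
    """Stands in for the Host app: forwards the question, assembles the answer."""
    def __init__(self, client):
        self.client = client
    def ask(self, question):
        docs = self.client.call("list_documents")  # model decided it needs data
        return f"You have {len(docs)} documents: {', '.join(docs)}"

host = Host(Client(FileServer()))
print(host.ask("What documents do I have on my desktop?"))
# You have 2 documents: notes.txt, report.pdf
```

The point of the sketch is the separation of roles: the Host never touches the file system, and the Server never sees the conversation.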
-This architecture **lets agents dynamically call tools and access data**—local or remote—while developers only focus on building the relevant MCPServer.
+This architecture **lets agents dynamically call tools and access data**—local or remote—while developers only focus on building the relevant MCPServer.
**You don’t have to handle the nitty-gritty of connecting Hosts and Clients.**
-For deeper architecture details and diagrams, see the
+For deeper architecture details and diagrams, see the
official MCP docs: Architecture Concepts.
@@ -100,4 +100,3 @@ For deeper architecture details and diagrams, see the
- **Users** get safer, more flexible, and privacy-friendly AI workflows.
---
-
diff --git a/docs/mintlify/convert_notebook2mdx.py b/docs/mintlify/convert_notebook2mdx.py
index 195bb4a6d4..74c0ebbfbb 100644
--- a/docs/mintlify/convert_notebook2mdx.py
+++ b/docs/mintlify/convert_notebook2mdx.py
@@ -343,7 +343,7 @@ def standardize_html_blocks(content):
-
+
⭐ Star us on Github , join our [*Discord*](https://discord.camel-ai.org) or follow our [*X*](https://x.com/camelaiorg) ⭐
"""
new3 = """
@@ -354,7 +354,7 @@ def standardize_html_blocks(content):
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
"""
diff --git a/docs/mintlify/cookbooks/advanced_features/agent_generate_structured_output.mdx b/docs/mintlify/cookbooks/advanced_features/agent_generate_structured_output.mdx
index 1f87e7bfa0..987fd13da9 100644
--- a/docs/mintlify/cookbooks/advanced_features/agent_generate_structured_output.mdx
+++ b/docs/mintlify/cookbooks/advanced_features/agent_generate_structured_output.mdx
@@ -11,7 +11,7 @@ You can also check this cookbook in colab [here](https://colab.research.google.c
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
@@ -188,13 +188,13 @@ class CalculationResult(BaseModel):
```python
# Define the user's question
-user_msg = """Assume now is 2024 in the Gregorian calendar,
-estimate the current age of University of Oxford
+user_msg = """Assume now is 2024 in the Gregorian calendar,
+estimate the current age of University of Oxford
and then add 10 more years to this age."""
# Get the structured response
response = camel_agent.step(
- user_msg,
+ user_msg,
response_format=CalculationResult
)
@@ -287,13 +287,13 @@ def generate_recipe(dish: str) -> Recipe:
"\"instruction\": \"...\", \"duration\": \"...\"}], \"dietary_info\": [\"...\"]}\n\n"
"Return ONLY the JSON object, without any additional text or markdown formatting."
)
-
+
try:
# Extract JSON from the response
content = response.msgs[0].content.strip()
if content.startswith("```json"):
content = content[7:-3].strip() # Remove markdown code block if present
-
+
# Parse and validate the response
recipe_data = json.loads(content)
return Recipe(**recipe_data)
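The slice-based fence removal in the hunk above (`content[7:-3]`) is brittle: it assumes the reply both starts with ```` ```json ```` and ends with a closing fence. A more defensive variant, offered as an optional sketch rather than the cookbook's canonical approach, handles fenced and unfenced replies alike:

```python
import json
import re

def parse_json_reply(content: str) -> dict:
    """Strip an optional Markdown code fence, then parse the JSON inside."""
    match = re.search(r"```(?:json)?\s*(.*?)\s*```", content, re.DOTALL)
    payload = match.group(1) if match else content.strip()
    return json.loads(payload)

reply = '```json\n{"name": "vegetable lasagna"}\n```'
print(parse_json_reply(reply))  # {'name': 'vegetable lasagna'}
```

Dropping the result into the existing `try` block would replace both the `startswith` check and the slice in one step.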
@@ -311,19 +311,19 @@ def generate_recipe(dish: str) -> Recipe:
# Cell 4: Generate and display a recipe
try:
recipe = generate_recipe("vegetable lasagna")
-
+
print(f"=== {recipe.name.upper()} ===")
print(recipe.description)
print(f"\nPreparation: {recipe.prep_time} | Cooking: {recipe.cook_time} | Servings: {recipe.servings}")
-
+
print("\nINGREDIENTS:")
for ing in recipe.ingredients:
print(f"- {ing.amount} {ing.unit} {ing.name}")
-
+
print("\nINSTRUCTIONS:")
for step in recipe.instructions:
print(f"{step.step_number}. {step.instruction} ({step.duration})")
-
+
print("\nDIETARY INFO:", ", ".join(recipe.dietary_info))
except Exception as e:
@@ -331,7 +331,7 @@ except Exception as e:
```
4.2. **Alternative approach**
-- Using response_format with the default model
+- Using response_format with the default model
- This shows how it would work with a model that supports structured output
@@ -343,11 +343,11 @@ try:
"Give me a recipe for vegetable lasagna",
response_format=Recipe
)
-
+
print("\n=== Using response_format ===")
print("Recipe name:", response.msgs[0].parsed.name)
print("First ingredient:", response.msgs[0].parsed.ingredients[0].name)
-
+
except Exception as e:
print("\nNote: The default model might not support structured output natively.")
print("Error:", e)
@@ -405,9 +405,8 @@ Thanks from everyone at 🐫 CAMEL-AI
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
---
-
diff --git a/docs/mintlify/cookbooks/advanced_features/agents_with_MCP.mdx b/docs/mintlify/cookbooks/advanced_features/agents_with_MCP.mdx
index 4b7aadc716..55710fd152 100644
--- a/docs/mintlify/cookbooks/advanced_features/agents_with_MCP.mdx
+++ b/docs/mintlify/cookbooks/advanced_features/agents_with_MCP.mdx
@@ -11,7 +11,7 @@ You can also check this cookbook in colab [here](https://colab.research.google.c
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
@@ -420,9 +420,8 @@ Thanks from everyone at 🐫 CAMEL-AI
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
---
-
diff --git a/docs/mintlify/cookbooks/advanced_features/agents_with_dkg.mdx b/docs/mintlify/cookbooks/advanced_features/agents_with_dkg.mdx
index 38afd49dfd..d31716a691 100644
--- a/docs/mintlify/cookbooks/advanced_features/agents_with_dkg.mdx
+++ b/docs/mintlify/cookbooks/advanced_features/agents_with_dkg.mdx
@@ -11,7 +11,7 @@ You can also check this cookbook in colab [here](https://drive.google.com/file/d
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
@@ -488,7 +488,7 @@ This comprehensive setup allows you to adapt and expand the example for various
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
diff --git a/docs/mintlify/cookbooks/advanced_features/agents_with_human_in_loop_and_tool_approval.mdx b/docs/mintlify/cookbooks/advanced_features/agents_with_human_in_loop_and_tool_approval.mdx
index 35979ab2db..37aa20d985 100644
--- a/docs/mintlify/cookbooks/advanced_features/agents_with_human_in_loop_and_tool_approval.mdx
+++ b/docs/mintlify/cookbooks/advanced_features/agents_with_human_in_loop_and_tool_approval.mdx
@@ -11,7 +11,7 @@ You can also check this cookbook in colab [here](https://colab.research.google.c
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
@@ -199,7 +199,7 @@ Thanks from everyone at 🐫 CAMEL-AI
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
diff --git a/docs/mintlify/cookbooks/advanced_features/agents_with_tools.mdx b/docs/mintlify/cookbooks/advanced_features/agents_with_tools.mdx
index 798c43c735..86052199d9 100644
--- a/docs/mintlify/cookbooks/advanced_features/agents_with_tools.mdx
+++ b/docs/mintlify/cookbooks/advanced_features/agents_with_tools.mdx
@@ -80,7 +80,7 @@ To do so, **comment out** the above **manual** API key prompt code block(s), and
# os.environ["OPENAI_API_KEY"] = userdata.get("OPENAI_API_KEY")
```
-Now you have done that, let’s customize a tool by taking the simple math calculator, functions add and sub, as an example. When you define your own function, make sure the argument name and docstring are clear so that the agent can understand what this function can do and when to use the function based on the function information you provide.
+Now that you have done that, let’s customize a tool by taking a simple math calculator, with functions add and sub, as an example. When you define your own function, make sure the argument names and docstring are clear, so the agent can understand what the function does and when to use it based on the information you provide.
> This is just to demonstrate the use of custom tools, the built-in MathToolkit already includes tools for add and sub.
diff --git a/docs/mintlify/cookbooks/advanced_features/agents_with_tools_from_ACI.mdx b/docs/mintlify/cookbooks/advanced_features/agents_with_tools_from_ACI.mdx
index f0b4e92c67..117c2a6846 100644
--- a/docs/mintlify/cookbooks/advanced_features/agents_with_tools_from_ACI.mdx
+++ b/docs/mintlify/cookbooks/advanced_features/agents_with_tools_from_ACI.mdx
@@ -112,9 +112,8 @@ Thanks from everyone at 🐫 CAMEL-AI
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
---
-
diff --git a/docs/mintlify/cookbooks/advanced_features/critic_agents_and_tree_search.mdx b/docs/mintlify/cookbooks/advanced_features/critic_agents_and_tree_search.mdx
index 21cf2bc9c9..c81a83d152 100644
--- a/docs/mintlify/cookbooks/advanced_features/critic_agents_and_tree_search.mdx
+++ b/docs/mintlify/cookbooks/advanced_features/critic_agents_and_tree_search.mdx
@@ -15,7 +15,7 @@ You can also check this cookbook in colab [here](https://colab.research.google.c
In this section, we will take a closer look at the task-oriented `RolePlaying()` class. We design this in an instruction-following manner. The essence is that, to solve a complex task, you can enable two communicative agents to work together collaboratively, step by step, to reach solutions. The main concepts include:
- **Task**: a task can be as simple as an idea, initialized by an inception prompt.
- **AI User**: the agent who is expected to provide instructions.
-- **AI Assistant**: the agent who is expected to respond with solutions that fulfills the instructions.
+- **AI Assistant**: the agent who is expected to respond with solutions that fulfill the instructions.
**Prerequisite**: We assume that you have read the section on [intro to role-playing](https://colab.research.google.com/drive/1cmWPxXEsyMbmjPhD2bWfHuhd_Uz6FaJQ?usp=sharing).
diff --git a/docs/mintlify/cookbooks/applications/customer_service_Discord_bot_using_Cohere_model_with_agentic_RAG.mdx b/docs/mintlify/cookbooks/applications/customer_service_Discord_bot_using_Cohere_model_with_agentic_RAG.mdx
index 8c630858cf..db6655d16f 100644
--- a/docs/mintlify/cookbooks/applications/customer_service_Discord_bot_using_Cohere_model_with_agentic_RAG.mdx
+++ b/docs/mintlify/cookbooks/applications/customer_service_Discord_bot_using_Cohere_model_with_agentic_RAG.mdx
@@ -12,13 +12,13 @@ documentation sources and listings.
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
---
-## Installation and Setup
+## Installation and Setup
Setting up the environment by installing the CAMEL package with all its dependencies
diff --git a/docs/mintlify/cookbooks/applications/customer_service_Discord_bot_using_SambaNova_with_agentic_RAG.mdx b/docs/mintlify/cookbooks/applications/customer_service_Discord_bot_using_SambaNova_with_agentic_RAG.mdx
index 0c5f1b2180..1d9e9a466b 100644
--- a/docs/mintlify/cookbooks/applications/customer_service_Discord_bot_using_SambaNova_with_agentic_RAG.mdx
+++ b/docs/mintlify/cookbooks/applications/customer_service_Discord_bot_using_SambaNova_with_agentic_RAG.mdx
@@ -393,8 +393,6 @@ Thanks from everyone at 🐫 CAMEL-AI
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
-
-
diff --git a/docs/mintlify/cookbooks/applications/customer_service_Discord_bot_using_local_model_with_agentic_RAG.mdx b/docs/mintlify/cookbooks/applications/customer_service_Discord_bot_using_local_model_with_agentic_RAG.mdx
index ef56167180..447303d9dd 100644
--- a/docs/mintlify/cookbooks/applications/customer_service_Discord_bot_using_local_model_with_agentic_RAG.mdx
+++ b/docs/mintlify/cookbooks/applications/customer_service_Discord_bot_using_local_model_with_agentic_RAG.mdx
@@ -402,8 +402,6 @@ Thanks from everyone at 🐫 CAMEL-AI
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
-
-
diff --git a/docs/mintlify/cookbooks/applications/customer_service_Discord_bot_with_agentic_RAG.mdx b/docs/mintlify/cookbooks/applications/customer_service_Discord_bot_with_agentic_RAG.mdx
index 69a7502802..e4b60439f1 100644
--- a/docs/mintlify/cookbooks/applications/customer_service_Discord_bot_with_agentic_RAG.mdx
+++ b/docs/mintlify/cookbooks/applications/customer_service_Discord_bot_with_agentic_RAG.mdx
@@ -259,7 +259,7 @@ async def on_message(message: discord.Message):
if message.author.bot:
return
user_input = message.content
-
+
agent.reset()
agent.update_memory(knowledge_message, "user")
assistant_response = agent.step(user_input)
diff --git a/docs/mintlify/cookbooks/applications/dynamic_travel_planner.mdx b/docs/mintlify/cookbooks/applications/dynamic_travel_planner.mdx
index 09ada8868d..7d562c8fe3 100644
--- a/docs/mintlify/cookbooks/applications/dynamic_travel_planner.mdx
+++ b/docs/mintlify/cookbooks/applications/dynamic_travel_planner.mdx
@@ -11,7 +11,7 @@ You can also check this cookbook in colab [here](https://colab.research.google.c
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
@@ -325,7 +325,7 @@ Thanks from everyone at 🐫 CAMEL-AI
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
diff --git a/docs/mintlify/cookbooks/applications/finance_discord_bot.mdx b/docs/mintlify/cookbooks/applications/finance_discord_bot.mdx
index dac6096139..100933bbb6 100644
--- a/docs/mintlify/cookbooks/applications/finance_discord_bot.mdx
+++ b/docs/mintlify/cookbooks/applications/finance_discord_bot.mdx
@@ -11,7 +11,7 @@ You can also check this cookbook in colab [here](https://colab.research.google.c
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
@@ -352,8 +352,6 @@ Thanks from everyone at 🐫 CAMEL-AI
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
-
-
diff --git a/docs/mintlify/cookbooks/applications/pptx_toolkit.mdx b/docs/mintlify/cookbooks/applications/pptx_toolkit.mdx
index 0b2ff22968..3b21a93ae0 100644
--- a/docs/mintlify/cookbooks/applications/pptx_toolkit.mdx
+++ b/docs/mintlify/cookbooks/applications/pptx_toolkit.mdx
@@ -4,33 +4,33 @@ title: "🐫 📊 CAMEL-AI PPTXToolkit Cookbook"
This notebook shows you how to **automatically generate** and **assemble** professional PowerPoint decks using CAMEL-AI’s PPTXToolkit. You’ll learn how to:
-- Prompt an LLM to produce **fully structured JSON** for every slide
-- Turn that JSON into a polished `.pptx` with **titles**, **bullets**, **step diagrams**, **tables**, and **images**
-- Leverage **Markdown** styling (`**bold**`, `*italic*`) and **Pexels** image search via `img_keywords`
-- Plug in your **own .pptx templates** (modern, boardroom, minimalist, etc.)
-- Enjoy **auto-layout** selection for text, diagrams, and tables
+- Prompt an LLM to produce **fully structured JSON** for every slide
+- Turn that JSON into a polished `.pptx` with **titles**, **bullets**, **step diagrams**, **tables**, and **images**
+- Leverage **Markdown** styling (`**bold**`, `*italic*`) and **Pexels** image search via `img_keywords`
+- Plug in your **own .pptx templates** (modern, boardroom, minimalist, etc.)
+- Enjoy **auto-layout** selection for text, diagrams, and tables
## 🚥 Pipeline Overview
-1. **Single Agent: Content → JSON**
- - You send one prompt to the LLM
+1. **Single Agent: Content → JSON**
+ - You send one prompt to the LLM
- It returns a JSON list with:
- - A **title slide** (`title`, `subtitle`)
- - At least one **step-by-step** slide (all bullets start with `>>`)
- - At least one **table** slide (`table`: `{headers, rows}`)
- - Two or more slides with meaningful `img_keywords`
- - All **bullet slides** using Markdown formatting
+ - A **title slide** (`title`, `subtitle`)
+ - At least one **step-by-step** slide (all bullets start with `>>`)
+ - At least one **table** slide (`table`: `{headers, rows}`)
+ - Two or more slides with meaningful `img_keywords`
+ - All **bullet slides** using Markdown formatting
-2. **PPTXToolkit: JSON → `.pptx`**
- - Pass that JSON into `PPTXToolkit.create_presentation(...)`
- - Renders slides with your chosen template, images via `img_keywords`, chevrons/pentagons, and tables
- - Outputs a ready-to-share PowerPoint file
+2. **PPTXToolkit: JSON → `.pptx`**
+ - Pass that JSON into `PPTXToolkit.create_presentation(...)`
+ - Renders slides with your chosen template, images via `img_keywords`, chevrons/pentagons, and tables
+ - Outputs a ready-to-share PowerPoint file
---
-Ready to build your next deck? Let’s get started! 🎉
+Ready to build your next deck? Let’s get started! 🎉
You can also check this cookbook in colab [here](https://colab.research.google.com/drive/1W_dsoq1jrO8A_TzwUxzAr4wFSWeXLmn7?usp=sharing)
diff --git a/docs/mintlify/cookbooks/basic_concepts/agents_message.mdx b/docs/mintlify/cookbooks/basic_concepts/agents_message.mdx
index b1a4fb0bf1..67f7517880 100644
--- a/docs/mintlify/cookbooks/basic_concepts/agents_message.mdx
+++ b/docs/mintlify/cookbooks/basic_concepts/agents_message.mdx
@@ -11,7 +11,7 @@ You can also check this cookbook in colab [here](https://colab.research.google.c
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
@@ -311,9 +311,8 @@ Thanks from everyone at 🐫 CAMEL-AI
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
---
-
diff --git a/docs/mintlify/cookbooks/basic_concepts/agents_prompting.mdx b/docs/mintlify/cookbooks/basic_concepts/agents_prompting.mdx
index 237e253491..bbfdee9d1c 100644
--- a/docs/mintlify/cookbooks/basic_concepts/agents_prompting.mdx
+++ b/docs/mintlify/cookbooks/basic_concepts/agents_prompting.mdx
@@ -11,13 +11,13 @@ You can also check this cookbook in colab [here](https://colab.research.google.c
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
---
-This notebook demonstrates how to set up and leverage CAMEL's ability to use **Prompt** module.
+This notebook demonstrates how to set up and leverage CAMEL's **Prompt** module.
In this notebook, you'll explore:
@@ -229,9 +229,8 @@ Thanks from everyone at 🐫 CAMEL-AI
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
---
-
diff --git a/docs/mintlify/cookbooks/basic_concepts/create_your_first_agent.mdx b/docs/mintlify/cookbooks/basic_concepts/create_your_first_agent.mdx
index ddfd0d9654..1e5c6d8aef 100644
--- a/docs/mintlify/cookbooks/basic_concepts/create_your_first_agent.mdx
+++ b/docs/mintlify/cookbooks/basic_concepts/create_your_first_agent.mdx
@@ -11,19 +11,19 @@ You can also check this cookbook in colab [here](https://colab.research.google.c
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
---
-This notebook demonstrates how to set up and leverage CAMEL's ability to use `ChatAgent()` class.
+This notebook demonstrates how to set up and leverage CAMEL's `ChatAgent()` class.
In this notebook, you'll explore:
* **CAMEL**: A powerful multi-agent framework that enables Retrieval-Augmented Generation and multi-agent role-playing scenarios, allowing for sophisticated AI-driven tasks.
-* **ChatAgent()**: The class is a cornerstone of CAMEL.
+* **ChatAgent()**: A cornerstone class of CAMEL.
## Philosophical Bits
@@ -229,7 +229,7 @@ Key tools utilized in this notebook include:
* **CAMEL**: A powerful multi-agent framework that enables Retrieval-Augmented Generation and multi-agent role-playing scenarios, allowing for sophisticated AI-driven tasks.
-* **ChatAgent()**: The class is a cornerstone of CAMEL.
+* **ChatAgent()**: A cornerstone class of CAMEL.
That's everything: Got questions about 🐫 CAMEL-AI? Join us on [Discord](https://discord.camel-ai.org)! Whether you want to share feedback, explore the latest in multi-agent systems, get support, or connect with others on exciting projects, we’d love to have you in the community! 🤝
@@ -255,11 +255,8 @@ Thanks from everyone at 🐫 CAMEL-AI
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
---
-
-
-
diff --git a/docs/mintlify/cookbooks/basic_concepts/create_your_first_agents_society.mdx b/docs/mintlify/cookbooks/basic_concepts/create_your_first_agents_society.mdx
index b32252307b..e3afdde746 100644
--- a/docs/mintlify/cookbooks/basic_concepts/create_your_first_agents_society.mdx
+++ b/docs/mintlify/cookbooks/basic_concepts/create_your_first_agents_society.mdx
@@ -11,7 +11,7 @@ You can also check this cookbook in colab [here](https://colab.research.google.c
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
@@ -238,11 +238,8 @@ Thanks from everyone at 🐫 CAMEL-AI
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
---
-
-
-
diff --git a/docs/mintlify/cookbooks/basic_concepts/model_speed_comparison.mdx b/docs/mintlify/cookbooks/basic_concepts/model_speed_comparison.mdx
index d94b4e74ec..ff07f98362 100644
--- a/docs/mintlify/cookbooks/basic_concepts/model_speed_comparison.mdx
+++ b/docs/mintlify/cookbooks/basic_concepts/model_speed_comparison.mdx
@@ -11,7 +11,7 @@ You can also check this cookbook in colab [here](https://colab.research.google.c
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
@@ -23,7 +23,7 @@ In this notebook, you'll explore:
* **CAMEL**: A powerful multi-agent framework that enables Retrieval-Augmented Generation and multi-agent role-playing scenarios, allowing for sophisticated AI-driven tasks.
-* **ChatAgent()**: The class is a cornerstone of CAMEL.
+* **ChatAgent()**: A cornerstone class of CAMEL.
* **BaseMessage**: The base class for message objects used in the CAMEL chat system.
@@ -166,9 +166,8 @@ Thanks from everyone at 🐫 CAMEL-AI
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
---
-
diff --git a/docs/mintlify/cookbooks/data_generation/cot_data_gen_sft_qwen_unsolth_upload_huggingface.mdx b/docs/mintlify/cookbooks/data_generation/cot_data_gen_sft_qwen_unsolth_upload_huggingface.mdx
index 247caa624f..cdea4eca3c 100644
--- a/docs/mintlify/cookbooks/data_generation/cot_data_gen_sft_qwen_unsolth_upload_huggingface.mdx
+++ b/docs/mintlify/cookbooks/data_generation/cot_data_gen_sft_qwen_unsolth_upload_huggingface.mdx
@@ -12,7 +12,7 @@ To run this, press "*Runtime*" and press "*Run all*" on a **free** Tesla T4 Goog
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
@@ -727,8 +727,6 @@ Thanks from everyone at 🐫 CAMEL-AI
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
-
-
diff --git a/docs/mintlify/cookbooks/data_generation/data_gen_with_real_function_calls_and_hermes_format.mdx b/docs/mintlify/cookbooks/data_generation/data_gen_with_real_function_calls_and_hermes_format.mdx
index 1f725fbe50..71d9301416 100644
--- a/docs/mintlify/cookbooks/data_generation/data_gen_with_real_function_calls_and_hermes_format.mdx
+++ b/docs/mintlify/cookbooks/data_generation/data_gen_with_real_function_calls_and_hermes_format.mdx
@@ -11,7 +11,7 @@ You can also check this cookbook in colab [here](https://colab.research.google.c
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
@@ -322,7 +322,7 @@ In this tutorial, you learned how to generate user queries and structure tool ca
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
diff --git a/docs/mintlify/cookbooks/data_generation/data_model_generation_and_structured_output_with_qwen.mdx b/docs/mintlify/cookbooks/data_generation/data_model_generation_and_structured_output_with_qwen.mdx
index c4201b182e..ea623f57da 100644
--- a/docs/mintlify/cookbooks/data_generation/data_model_generation_and_structured_output_with_qwen.mdx
+++ b/docs/mintlify/cookbooks/data_generation/data_model_generation_and_structured_output_with_qwen.mdx
@@ -11,7 +11,7 @@ You can also check this cookbook in colab [here](https://colab.research.google.c
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
@@ -198,7 +198,7 @@ Thanks from everyone at 🐫 CAMEL-AI
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
diff --git a/docs/mintlify/cookbooks/data_generation/distill_math_reasoning_data_from_deepseek_r1.mdx b/docs/mintlify/cookbooks/data_generation/distill_math_reasoning_data_from_deepseek_r1.mdx
index 3a39d9776e..d47187d88f 100644
--- a/docs/mintlify/cookbooks/data_generation/distill_math_reasoning_data_from_deepseek_r1.mdx
+++ b/docs/mintlify/cookbooks/data_generation/distill_math_reasoning_data_from_deepseek_r1.mdx
@@ -11,7 +11,7 @@ You can also check this cookbook in colab [here](https://colab.research.google.c
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
@@ -35,15 +35,15 @@ Through the use of our synthetic data generation pipeline, CAMEL-AI has crafted
- **📚 AMC AIME STaR Dataset**
A dataset of 4K advanced mathematical problems and solutions, distilled with improvement history showing how the solution was iteratively refined.
- 🔗 [Explore the Dataset](https://huggingface.co/datasets/camel-ai/amc_aime_star)
+ 🔗 [Explore the Dataset](https://huggingface.co/datasets/camel-ai/amc_aime_star)
- **📚 AMC AIME Distilled Dataset**
- A dataset of 4K advanced mathematical problems and solutions, distilled with clear step-by-step solutions. 🔗 [Explore the Dataset](https://huggingface.co/datasets/camel-ai/amc_aime_distilled)
+ A dataset of 4K advanced mathematical problems and solutions, distilled with clear step-by-step solutions. 🔗 [Explore the Dataset](https://huggingface.co/datasets/camel-ai/amc_aime_distilled)
- **📚 GSM8K Distilled Dataset**
- A dataset of 7K high quality linguistically diverse grade school math word problems and solutions, distilled with clear step-by-step solutions. 🔗 [Explore the Dataset](https://huggingface.co/datasets/camel-ai/gsm8k_distilled)
+ A dataset of 7K high quality linguistically diverse grade school math word problems and solutions, distilled with clear step-by-step solutions. 🔗 [Explore the Dataset](https://huggingface.co/datasets/camel-ai/gsm8k_distilled)
Perfect for those eager to explore AI-driven problem-solving or dive deep into mathematical reasoning! 🚀✨
@@ -489,7 +489,7 @@ print(f"You can view your dataset at: {dataset_url}")
## 🌟 Highlights
- **High-Quality Synthetic Data Generation:** CAMEL’s pipeline distills mathematical reasoning datasets with detailed step-by-step solutions, ideal for synthetic data generation.
-
+
- **Public Datasets:** Includes the **AMC AIME STaR**, **AMC AIME Distilled**, and **GSM8K Distilled Datasets**, providing diverse problems and reasoning solutions across various math topics.
- **Hugging Face Integration:** Easily share and access datasets on Hugging Face for collaborative research and development.
@@ -520,9 +520,8 @@ Thanks from everyone at 🐫 CAMEL-AI
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
---
-
diff --git a/docs/mintlify/cookbooks/data_generation/self_improving_cot_generation.mdx b/docs/mintlify/cookbooks/data_generation/self_improving_cot_generation.mdx
index 7dd505f4e3..184aba441b 100644
--- a/docs/mintlify/cookbooks/data_generation/self_improving_cot_generation.mdx
+++ b/docs/mintlify/cookbooks/data_generation/self_improving_cot_generation.mdx
@@ -16,7 +16,7 @@ CAMEL developed an approach that leverages iterative refinement, self-assessment, and
## 1. Overview of the End-to-End Pipeline 🔍
-### 1.1 Why an Iterative CoT Pipeline?
+### 1.1 Why an Iterative CoT Pipeline?
One-time CoT generation often leads to incomplete or suboptimal solutions. CAMEL addresses this challenge by employing a multi-step, iterative approach:
@@ -26,7 +26,7 @@ One-time CoT generation often leads to incomplete or suboptimal solutions. CAMEL
This self-improving methodology ensures that the reasoning process improves progressively, meeting specific thresholds for correctness, clarity, and completeness. Each iteration enhances the model's ability to solve the problem by learning from the previous outputs and evaluations.
-### 1.2 Core Components
+### 1.2 Core Components
The self-improving pipeline consists of three key components:
1. **`reason_agent`:** This agent is responsible for generating or improving reasoning traces.
@@ -61,7 +61,7 @@ Once the reasoning trace is generated, it is evaluated for its quality. This eva
- **Detecting weaknesses**: The evaluation identifies areas where the reasoning trace could be further improved.
- **Providing feedback**: The evaluation produces feedback that guides the agent in refining the reasoning trace. This feedback can come from either the **`evaluate_agent`** or a **`reward_model`**.
-#### 2.2.1 Agent-Based Evaluation
+#### 2.2.1 Agent-Based Evaluation
If an **`evaluate_agent`** is available, it examines the reasoning trace for:
1. **Correctness**: Does the trace logically solve the problem?
@@ -70,7 +70,7 @@ If an **`evaluate_agent`** is available, it examines the reasoning trace for:
The feedback from the agent provides insights into areas for improvement, such as unclear reasoning or incorrect answers, offering a more generalized approach compared to rule-based matching.
-#### 2.2.2 Reward Model Evaluation
+#### 2.2.2 Reward Model Evaluation
Alternatively, the pipeline supports using a **reward model** to evaluate the trace. The reward model outputs scores based on predefined dimensions such as correctness, coherence, complexity, and verbosity.
@@ -83,7 +83,7 @@ The key to CAMEL's success in CoT generation is its **self-improving loop**. Aft
#### How does this iterative refinement work?
1. **Feedback Integration**: The feedback from the evaluation phase is used to refine the reasoning. This could involve rewording unclear parts, adding missing steps, or adjusting the logic to make it more correct or complete.
-
+
2. **Improvement through Reasoning**: After receiving feedback, the **`reason_agent`** is used again to generate an improved version of the reasoning trace. This trace incorporates the feedback provided, refining the earlier steps and enhancing the overall reasoning.
3. **Re-evaluation**: Once the trace is improved, the new version is evaluated again using the same process (either agent-based evaluation or reward model). This new trace is assessed against the same criteria to ensure the improvements have been made.
@@ -163,7 +163,7 @@ from camel.datagen import SelfImprovingCoTPipeline
# Initialize agents
reason_agent = ChatAgent(
- """Answer my question and give your
+ """Answer my question and give your
final answer within \\boxed{}."""
)
@@ -345,4 +345,4 @@ _Stay tuned for more updates on CAMEL's journey in advancing agentic synthetic d
- [Self-Improving Math Reasoning Data Distillation](https://docs.camel-ai.org/cookbooks/data_generation/self_improving_math_reasoning_data_distillation_from_deepSeek_r1.html)
- [Generating High-Quality SFT Data with CAMEL](https://docs.camel-ai.org/cookbooks/data_generation/sft_data_generation_and_unsloth_finetuning_Qwen2_5_7B.html)
- [Function Call Data Generation and Evaluation](https://docs.camel-ai.org/cookbooks/data_generation/data_gen_with_real_function_calls_and_hermes_format.html)
-- [Agentic Data Generation, Evaluation & Filtering with Reward Models](https://docs.camel-ai.org/cookbooks/data_generation/synthetic_dataevaluation%26filter_with_reward_model.html)
\ No newline at end of file
+- [Agentic Data Generation, Evaluation & Filtering with Reward Models](https://docs.camel-ai.org/cookbooks/data_generation/synthetic_dataevaluation%26filter_with_reward_model.html)
diff --git a/docs/mintlify/cookbooks/data_generation/self_improving_math_reasoning_data_distillation_from_deepSeek_r1.mdx b/docs/mintlify/cookbooks/data_generation/self_improving_math_reasoning_data_distillation_from_deepSeek_r1.mdx
index 04f3cd0e9c..eeec9f753c 100644
--- a/docs/mintlify/cookbooks/data_generation/self_improving_math_reasoning_data_distillation_from_deepSeek_r1.mdx
+++ b/docs/mintlify/cookbooks/data_generation/self_improving_math_reasoning_data_distillation_from_deepSeek_r1.mdx
@@ -11,7 +11,7 @@ You can also check this cookbook in colab [here](https://colab.research.google.c
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
@@ -34,15 +34,15 @@ Through the use of our synthetic data generation pipeline, CAMEL-AI has crafted
- **📚 AMC AIME STaR Dataset**
A dataset of 4K advanced mathematical problems and solutions, distilled with improvement history showing how the solution was iteratively refined.
- 🔗 [Explore the Dataset](https://huggingface.co/datasets/camel-ai/amc_aime_star)
+ 🔗 [Explore the Dataset](https://huggingface.co/datasets/camel-ai/amc_aime_star)
- **📚 AMC AIME Distilled Dataset**
- A dataset of 4K advanced mathematical problems and solutions, distilled with clear step-by-step solutions. 🔗 [Explore the Dataset](https://huggingface.co/datasets/camel-ai/amc_aime_distilled)
+ A dataset of 4K advanced mathematical problems and solutions, distilled with clear step-by-step solutions. 🔗 [Explore the Dataset](https://huggingface.co/datasets/camel-ai/amc_aime_distilled)
- **📚 GSM8K Distilled Dataset**
- A dataset of 7K high quality linguistically diverse grade school math word problems and solutions, distilled with clear step-by-step solutions. 🔗 [Explore the Dataset](https://huggingface.co/datasets/camel-ai/gsm8k_distilled)
+ A dataset of 7K high quality linguistically diverse grade school math word problems and solutions, distilled with clear step-by-step solutions. 🔗 [Explore the Dataset](https://huggingface.co/datasets/camel-ai/gsm8k_distilled)
Perfect for those eager to explore AI-driven problem-solving or dive deep into mathematical reasoning! 🚀✨
@@ -538,9 +538,8 @@ Thanks from everyone at 🐫 CAMEL-AI
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
---
-
diff --git a/docs/mintlify/cookbooks/data_generation/self_instruct_data_generation.mdx b/docs/mintlify/cookbooks/data_generation/self_instruct_data_generation.mdx
index 6e640f6c89..b51e953b02 100644
--- a/docs/mintlify/cookbooks/data_generation/self_instruct_data_generation.mdx
+++ b/docs/mintlify/cookbooks/data_generation/self_instruct_data_generation.mdx
@@ -11,7 +11,7 @@ You can also check this cookbook in colab [here](https://colab.research.google.c
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
@@ -412,9 +412,8 @@ Thanks from everyone at 🐫 CAMEL-AI
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
---
-
diff --git a/docs/mintlify/cookbooks/data_generation/sft_data_generation_and_unsloth_finetuning_Qwen2_5_7B.mdx b/docs/mintlify/cookbooks/data_generation/sft_data_generation_and_unsloth_finetuning_Qwen2_5_7B.mdx
index 7695cddc60..cd68143e96 100644
--- a/docs/mintlify/cookbooks/data_generation/sft_data_generation_and_unsloth_finetuning_Qwen2_5_7B.mdx
+++ b/docs/mintlify/cookbooks/data_generation/sft_data_generation_and_unsloth_finetuning_Qwen2_5_7B.mdx
@@ -14,7 +14,7 @@ To run this, press "*Runtime*" and press "*Run all*" on a **free** Tesla T4 Goog
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
@@ -391,7 +391,7 @@ Thanks from everyone at 🐫 CAMEL-AI
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
diff --git a/docs/mintlify/cookbooks/data_generation/sft_data_generation_and_unsloth_finetuning_mistral_7b_instruct.mdx b/docs/mintlify/cookbooks/data_generation/sft_data_generation_and_unsloth_finetuning_mistral_7b_instruct.mdx
index ec1791eed0..f768a479c1 100644
--- a/docs/mintlify/cookbooks/data_generation/sft_data_generation_and_unsloth_finetuning_mistral_7b_instruct.mdx
+++ b/docs/mintlify/cookbooks/data_generation/sft_data_generation_and_unsloth_finetuning_mistral_7b_instruct.mdx
@@ -14,7 +14,7 @@ To run this, press "*Runtime*" and press "*Run all*" on a **free** Tesla T4 Goog
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
@@ -388,7 +388,7 @@ Thanks from everyone at 🐫 CAMEL-AI
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
diff --git a/docs/mintlify/cookbooks/data_generation/sft_data_generation_and_unsloth_finetuning_tinyllama.mdx b/docs/mintlify/cookbooks/data_generation/sft_data_generation_and_unsloth_finetuning_tinyllama.mdx
index 5d58977abd..21d72a30e1 100644
--- a/docs/mintlify/cookbooks/data_generation/sft_data_generation_and_unsloth_finetuning_tinyllama.mdx
+++ b/docs/mintlify/cookbooks/data_generation/sft_data_generation_and_unsloth_finetuning_tinyllama.mdx
@@ -14,7 +14,7 @@ To run this, press "*Runtime*" and press "*Run all*" on a **free** Tesla T4 Goog
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
@@ -390,7 +390,7 @@ Thanks from everyone at 🐫 CAMEL-AI
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
diff --git a/docs/mintlify/cookbooks/data_generation/synthetic_dataevaluation&filter_with_reward_model.mdx b/docs/mintlify/cookbooks/data_generation/synthetic_dataevaluation&filter_with_reward_model.mdx
index 22b040fa2c..759532e4de 100644
--- a/docs/mintlify/cookbooks/data_generation/synthetic_dataevaluation&filter_with_reward_model.mdx
+++ b/docs/mintlify/cookbooks/data_generation/synthetic_dataevaluation&filter_with_reward_model.mdx
@@ -11,7 +11,7 @@ You can also check this cookbook in colab [here](https://colab.research.google.c
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
@@ -296,7 +296,6 @@ Thanks from everyone at 🐫 CAMEL-AI
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
-
diff --git a/docs/mintlify/cookbooks/data_processing/agent_with_chunkr_for_pdf_parsing.mdx b/docs/mintlify/cookbooks/data_processing/agent_with_chunkr_for_pdf_parsing.mdx
index 6c867b20c2..bb190bdb86 100644
--- a/docs/mintlify/cookbooks/data_processing/agent_with_chunkr_for_pdf_parsing.mdx
+++ b/docs/mintlify/cookbooks/data_processing/agent_with_chunkr_for_pdf_parsing.mdx
@@ -25,7 +25,7 @@ To run this, press "_Runtime_" and press "_Run all_" on a **free** Tesla T4 Goog
-
+
⭐ Star us on [*Github*](https://github.com/camel-ai/camel), join our [*Discord*](https://discord.camel-ai.org) or follow our [*X*](https://x.com/camelaiorg)
@@ -315,6 +315,6 @@ Thanks from everyone at 🐫 CAMEL-AI
-
+
⭐ Star us on Github, join our [*Discord*](https://discord.camel-ai.org) or follow our [*X*](https://x.com/camelaiorg) ⭐
diff --git a/docs/mintlify/cookbooks/data_processing/ingest_data_from_websites_with_Firecrawl.mdx b/docs/mintlify/cookbooks/data_processing/ingest_data_from_websites_with_Firecrawl.mdx
index c5016b254d..12fd57ca25 100644
--- a/docs/mintlify/cookbooks/data_processing/ingest_data_from_websites_with_Firecrawl.mdx
+++ b/docs/mintlify/cookbooks/data_processing/ingest_data_from_websites_with_Firecrawl.mdx
@@ -28,7 +28,7 @@ In this notebook, you'll explore:
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
@@ -253,9 +253,8 @@ Thanks from everyone at 🐫 CAMEL-AI
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
---
-
diff --git a/docs/mintlify/cookbooks/data_processing/summarisation_agent_with_mistral_ocr.mdx b/docs/mintlify/cookbooks/data_processing/summarisation_agent_with_mistral_ocr.mdx
index c39379bd58..cbe9022478 100644
--- a/docs/mintlify/cookbooks/data_processing/summarisation_agent_with_mistral_ocr.mdx
+++ b/docs/mintlify/cookbooks/data_processing/summarisation_agent_with_mistral_ocr.mdx
@@ -24,7 +24,7 @@ In this cookbook, we’ll explore [**Mistral OCR**](https://mistral.ai/news/mist
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
@@ -40,19 +40,19 @@ Throughout history, advancements in information abstraction and retrieval have d
#### **Key Features of Mistral OCR:**
-1. **State-of-the-art complex document understanding**
+1. **State-of-the-art complex document understanding**
- Extracts interleaved text, figures, tables, and mathematical expressions with high fidelity.
-2. **Natively multilingual & multimodal**
+2. **Natively multilingual & multimodal**
- Parses scripts and fonts from across the globe, handling right-to-left layouts and non-Latin characters seamlessly.
-3. **Doc-as-prompt, structured output**
+3. **Doc-as-prompt, structured output**
- Returns ordered Markdown, embedding images and bounding-box metadata ready for RAG and downstream AI workflows.
-4. **Top-tier benchmarks & speed**
+4. **Top-tier benchmarks & speed**
- Outperforms leading OCR systems in accuracy—especially in math, tables, and multilingual tests—while delivering fast batch inference (∼2000 pages/min).
-5. **Scalable & flexible deployment**
+5. **Scalable & flexible deployment**
- Available via `mistral-ocr-latest` on Mistral’s developer suite, cloud partners, and on-premises self-hosting for sensitive data.
Ready to unlock your documents? Let’s dive into the extraction guide.
@@ -71,10 +71,10 @@ First, install the CAMEL package with all its dependencies.
If you don’t have a Mistral API key, you can obtain one by following these steps:
-1. **Create an account:**
+1. **Create an account:**
Go to [Mistral Console](https://console.mistral.ai/home) and sign up for an organization account.
-2. **Get your API key:**
+2. **Get your API key:**
Once logged in, navigate to **Organization** → **API Keys**, generate a new key, copy it, and store it securely.
@@ -210,8 +210,6 @@ Thanks from everyone at 🐫 CAMEL-AI
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
-
-
diff --git a/docs/mintlify/cookbooks/data_processing/video_analysis.mdx b/docs/mintlify/cookbooks/data_processing/video_analysis.mdx
index b9b1e13aaf..c5a34496a1 100644
--- a/docs/mintlify/cookbooks/data_processing/video_analysis.mdx
+++ b/docs/mintlify/cookbooks/data_processing/video_analysis.mdx
@@ -11,7 +11,7 @@ You can also check this cookbook in colab [here](https://colab.research.google.c
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
@@ -174,11 +174,8 @@ Thanks from everyone at 🐫 CAMEL-AI
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
---
-
-
-
diff --git a/docs/mintlify/cookbooks/loong/batched_single_step_env.mdx b/docs/mintlify/cookbooks/loong/batched_single_step_env.mdx
index bd9f6ff113..52094bd2f9 100644
--- a/docs/mintlify/cookbooks/loong/batched_single_step_env.mdx
+++ b/docs/mintlify/cookbooks/loong/batched_single_step_env.mdx
@@ -10,7 +10,7 @@ Since many RL algorithms (such as GRPO) need multiple rollouts at each step, bat
First, we have to load a dataset from which we will sample questions. The dataset can be either a `StaticDataset`, which is finite and the length is known at runtime, or it can be a `BaseGenerator`, which is an infinite supply of question - answer pairs, synthetically generated in some way (depending on the implementation).
-For the sake of simplicity, we will start by loading the MATH dataset, remove unnecessary columns and rename the remaining ones, such that we can easily turn it into a `StaticDataset`, which `SingleStepEnv` can deal with.
+For the sake of simplicity, we will start by loading the MATH dataset, removing unnecessary columns, and renaming the remaining ones, so that we can easily turn it into a `StaticDataset`, which `SingleStepEnv` can work with.
First, install the CAMEL package with all its dependencies:
@@ -91,7 +91,7 @@ verifier = PythonVerifier(extractor=extractor)
await verifier.setup(uv=True)
```
-Let's now initialize the single step environment with our filtered dataset and our verifier. The verifier will later help with the correctness reward
+Let's now initialize the single step environment with our filtered dataset and our verifier. The verifier will later help compute the correctness reward.
We can then call `env.reset(batch_size=4)` to draw from the initial state distribution (the dataset) and return `batch_size` many observations, which can then be fed into the agent.
@@ -120,7 +120,7 @@ microbatch1 = [Action(index=2, llm_response="\\boxed{-5}"), Action(index=3, llm_
await env.step(microbatch1)
```
-We have already received rewards for actions 2 and 3 of our environment. Let's next finish this environment.
+We have already received rewards for actions 2 and 3 of our environment. Let's next finish this environment.
```python
diff --git a/docs/mintlify/cookbooks/loong/single_step_env.mdx b/docs/mintlify/cookbooks/loong/single_step_env.mdx
index 4a325b8d4d..c489efb4cb 100644
--- a/docs/mintlify/cookbooks/loong/single_step_env.mdx
+++ b/docs/mintlify/cookbooks/loong/single_step_env.mdx
@@ -8,7 +8,7 @@ It's called *single step* environment, because the agent only does one step. It
First, we have to load a dataset from which we will sample questions. The dataset can be either a `StaticDataset`, which is finite and the length is known at runtime, or it can be a `BaseGenerator`, which is an infinite supply of question - answer pairs, synthetically generated in some way (depending on the implementation).
-For the sake of simplicity, we will start by loading the MATH dataset, remove unnecessary columns and rename the remaining ones, such that we can easily turn it into a `StaticDataset`, which `SingleStepEnv` can deal with.
+For the sake of simplicity, we will start by loading the MATH dataset, remove unnecessary columns and rename the remaining ones, such that we can easily turn it into a `StaticDataset`, which `SingleStepEnv` can deal with.
First, install the CAMEL package with all its dependencies:
@@ -89,7 +89,7 @@ verifier = PythonVerifier(extractor=extractor)
await verifier.setup(uv=True)
```
-Let's now initialize the single step environment with our filtered dataset and our verifier. The verifier will later help with the correctness reward
+Let's now initialize the single step environment with our filtered dataset and our verifier. The verifier will later help with the correctness reward
We can then call `env.reset()` to draw from the initial state distribution and return an observation, which can then be fed into the agent.
diff --git a/docs/mintlify/cookbooks/mcp/agents_with_sql_mcp.mdx b/docs/mintlify/cookbooks/mcp/agents_with_sql_mcp.mdx
index ac4c4c1f2c..2570c72418 100644
--- a/docs/mintlify/cookbooks/mcp/agents_with_sql_mcp.mdx
+++ b/docs/mintlify/cookbooks/mcp/agents_with_sql_mcp.mdx
@@ -11,7 +11,7 @@ You can also check this cookbook in [Google Colab](https://drive.google.com/file
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
@@ -64,7 +64,7 @@ Create a file named `mcp_config.json` with the following content:
"transport": "stdio"
}
}
-}
+}
```
This configuration tells CAMEL how to start and communicate with your SQL MCP server.
@@ -222,12 +222,12 @@ async def main():
if not config_file_path.exists():
logger.error(f"MCP config file not found at: {config_file_path}")
return
-
+
mcp_toolkit = MCPToolkit(config_path=str(config_file_path))
-
+
await mcp_toolkit.connect()
tools = mcp_toolkit.get_tools()
-
+
# Get API key securely from user input
openrouter_api_key = getpass('Enter your OpenRouter API key: ')
if not openrouter_api_key:
@@ -279,18 +279,18 @@ async def main():
# Example query - you can modify this or make it interactive
user_question = "What tables are in the database and what's in them?"
-
+
logger.info(f"\n>>> User: {user_question}")
-
+
response = await agent.astep(user_question)
-
+
if response and response.msgs:
agent_reply = response.msgs[0].content
print(f"<<< Agent: {agent_reply}")
else:
print("<<< Agent: No response received from the model.")
logger.error("Response object or messages were empty")
-
+
print("\nScript finished.")
except Exception as e:
@@ -327,8 +327,8 @@ The agent will then:
2. Use the provided tools to interact with the database
3. Display the results in a human-readable format
-## Example Output -
-Enter your OpenRouter API key:
+## Example Output -
+Enter your OpenRouter API key:
Agent: I'll help you explore the database by first listing all tables and then examining their contents.
Let's start by listing the tables in the database:
@@ -427,9 +427,8 @@ Thanks from everyone at 🐫 CAMEL-AI
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
---
-
diff --git a/docs/mintlify/cookbooks/mcp/camel_aci_mcp_cookbook.mdx b/docs/mintlify/cookbooks/mcp/camel_aci_mcp_cookbook.mdx
index 1287f7ea55..b79a3cd47c 100644
--- a/docs/mintlify/cookbooks/mcp/camel_aci_mcp_cookbook.mdx
+++ b/docs/mintlify/cookbooks/mcp/camel_aci_mcp_cookbook.mdx
@@ -11,7 +11,7 @@ You can also check this cookbook in [Google Colab](https://drive.google.com/file
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
@@ -149,8 +149,8 @@ First things first, head to [ACI.dev](https://aci.dev) and sign up if you don't
Both methods use the same environment variables. Create a `.env` file in your project folder with these variables:
```bash
-GEMINI_API_KEY="your_gemini_api_key_here"
-ACI_API_KEY="your_aci_api_key_here"
+GEMINI_API_KEY="your_gemini_api_key_here"
+ACI_API_KEY="your_aci_api_key_here"
LINKED_ACCOUNT_OWNER_ID="your_linked_account_owner_id_here"
```
@@ -242,7 +242,7 @@ async def main():
await mcp_toolkit.connect()
tools = mcp_toolkit.get_tools() # connects and loads the tools in server
rprint(f"Connected successfully. Found [cyan]{len(tools)}[/cyan] tools available")
-
+
# Set up Gemini model
model = ModelFactory.create(
@@ -268,7 +268,7 @@ async def main():
user_query = input("\nEnter your query: ")
user_message = BaseMessage.make_user_message(role_name="User", content=user_query)
rprint("\n[yellow]Processing...[/yellow]")
- response = await agent.astep(user_message) # ask agent the question ( async )
+ response = await agent.astep(user_message) # ask agent the question ( async )
# Show response
if response and hasattr(response, "msgs") and response.msgs:
@@ -279,7 +279,7 @@ async def main():
rprint(f"Response content: {response}")
else:
rprint("[red]No response received[/red]")
-
+
# Disconnect from MCP
await mcp_toolkit.disconnect()
rprint("\n[green]Done[/green]")
@@ -340,18 +340,18 @@ load_dotenv()
def main():
rprint("[green]CAMEL AI with ACI Toolkit[/green]")
-
+
# get the linked account from env or use default
linked_account_owner_id = os.getenv("LINKED_ACCOUNT_OWNER_ID")
if not linked_account_owner_id:
raise ValueError("LINKED_ACCOUNT_OWNER_ID environment variable is required")
rprint(f"Using account: [cyan]{linked_account_owner_id}[/cyan]")
-
+
# setup aci toolkit
aci_toolkit = ACIToolkit(linked_account_owner_id=linked_account_owner_id)
tools = aci_toolkit.get_tools()
rprint(f"Loaded [cyan]{len(tools)}[/cyan] tools")
-
+
# setup gemini model
model = ModelFactory.create(
model_platform=ModelPlatformType.GEMINI, # you can use other models here too
@@ -359,28 +359,28 @@ def main():
api_key=os.getenv("GEMINI_API_KEY"),
model_config_dict={"temperature": 0.7, "max_tokens": 40000},
)
-
+
# create agent with tools
agent = ChatAgent(model=model, tools=tools)
rprint("[green]Agent ready[/green]")
-
+
# get user query
query = input("\nEnter your query: ")
rprint("\n[yellow]Processing...[/yellow]")
-
+
response = agent.step(query)
-
+
# show raw response
rprint(f"\n[dim]{response.msg}[/dim]")
rprint(f"\n[dim]Raw response type: {type(response)}[/dim]")
rprint(f"[dim]Response: {response}[/dim]")
-
+
# try to get the actual content
if hasattr(response, 'msgs') and response.msgs:
rprint(f"\nFound [cyan]{len(response.msgs)}[/cyan] messages:")
for i, msg in enumerate(response.msgs):
rprint(f"Message {i + 1}: {msg.content}")
-
+
rprint("\n[green]Done[/green]")
if __name__ == "__main__":
@@ -493,9 +493,8 @@ Thanks from everyone at 🐫 CAMEL-AI
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
---
-
diff --git a/docs/mintlify/cookbooks/multi_agent_society/azure_openai_claude_society.mdx b/docs/mintlify/cookbooks/multi_agent_society/azure_openai_claude_society.mdx
index 7fdd87c6bd..7581305687 100644
--- a/docs/mintlify/cookbooks/multi_agent_society/azure_openai_claude_society.mdx
+++ b/docs/mintlify/cookbooks/multi_agent_society/azure_openai_claude_society.mdx
@@ -11,7 +11,7 @@ title: "🍳 CAMEL Cookbook: Building a Collaborative AI Research Society"
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
@@ -635,9 +635,8 @@ Thanks from everyone at 🐫 CAMEL-AI
-
+
⭐ *Star us on [GitHub](https://github.com/camel-ai/camel), join our [Discord](https://discord.camel-ai.org), or follow us on [X](https://x.com/camelaiorg)*
---
-
diff --git a/docs/mintlify/docs.json b/docs/mintlify/docs.json
index e09e23ca5c..374c34f0b6 100644
--- a/docs/mintlify/docs.json
+++ b/docs/mintlify/docs.json
@@ -766,4 +766,4 @@
"measurementId": "G-XV5VXFS2RG"
}
}
-}
\ No newline at end of file
+}
diff --git a/docs/mintlify/get_started/installation.mdx b/docs/mintlify/get_started/installation.mdx
index a66a6115ac..5ff752bf51 100644
--- a/docs/mintlify/get_started/installation.mdx
+++ b/docs/mintlify/get_started/installation.mdx
@@ -104,7 +104,7 @@ We recommend starting with a simple role-playing scenario to understand CAMEL's
```bash
pip install -r requirements.txt
```
-
+
- Set up your environment variables by loading the `.env` file:
```python
from dotenv import load_dotenv
@@ -116,7 +116,7 @@ We recommend starting with a simple role-playing scenario to understand CAMEL's
python examples/role_playing.py
```
-
+
Want to see multi-agent collaboration at scale?
Try running the workforce example:
@@ -235,10 +235,10 @@ python examples/ai_society/role_playing.py
python examples/toolkits/code_execution_toolkit.py
# Generating knowledge graphs with agents
-python examples/knowledge_graph/knowledge_graph_agent_example.py
+python examples/knowledge_graph/knowledge_graph_agent_example.py
# Multiple agents collaborating on complex tasks
-python examples/workforce/multiple_single_agents.py
+python examples/workforce/multiple_single_agents.py
# Creative image generation with agents
python examples/vision/image_crafting.py
diff --git a/docs/mintlify/get_started/introduction.mdx b/docs/mintlify/get_started/introduction.mdx
index 87aebf2a90..bf4e0b7d6c 100644
--- a/docs/mintlify/get_started/introduction.mdx
+++ b/docs/mintlify/get_started/introduction.mdx
@@ -44,7 +44,7 @@ description: |
- OWL (Optimized Workforce Learning) is a multi-agent automation framework for real-world tasks. Built on CAMEL-AI,
+ OWL (Optimized Workforce Learning) is a multi-agent automation framework for real-world tasks. Built on CAMEL-AI,
it enables dynamic agent collaboration using tools like browsers, code interpreters, and multimodal models.
diff --git a/docs/mintlify/images/mcp_camel.svg b/docs/mintlify/images/mcp_camel.svg
index 32dd11f31f..cc22aeed20 100644
--- a/docs/mintlify/images/mcp_camel.svg
+++ b/docs/mintlify/images/mcp_camel.svg
@@ -1 +1 @@
-
\ No newline at end of file
+
diff --git a/docs/mintlify/key_modules/agents.mdx b/docs/mintlify/key_modules/agents.mdx
index 008849af23..9561ed8d56 100644
--- a/docs/mintlify/key_modules/agents.mdx
+++ b/docs/mintlify/key_modules/agents.mdx
@@ -7,7 +7,7 @@ icon: user-helmet-safety
## Concept
-Agents in CAMEL are autonomous entities capable of performing specific tasks through interaction with language models and other components.
+Agents in CAMEL are autonomous entities capable of performing specific tasks through interaction with language models and other components.
Each agent is designed with a particular role and capability, allowing them to work independently or collaboratively to achieve complex goals.
@@ -38,26 +38,26 @@ The `ChatAgent` is the primary implementation that handles conversations with la
-
- **`CriticAgent`**
+
+ **`CriticAgent`**
Specialized agent for evaluating and critiquing responses or solutions. Used in scenarios requiring quality assessment or validation.
- **`DeductiveReasonerAgent`**
+ **`DeductiveReasonerAgent`**
Focused on logical reasoning and deduction. Breaks down complex problems into smaller, manageable steps.
- **`EmbodiedAgent`**
+ **`EmbodiedAgent`**
Designed for embodied AI scenarios, capable of understanding and responding to physical world contexts.
- **`KnowledgeGraphAgent`**
+ **`KnowledgeGraphAgent`**
Specialized in building and utilizing knowledge graphs for enhanced reasoning and information management.
- **`MultiHopGeneratorAgent`**
+ **`MultiHopGeneratorAgent`**
Handles multi-hop reasoning tasks, generating intermediate steps to reach conclusions.
- **`SearchAgent`**
+ **`SearchAgent`**
Focused on information retrieval and search tasks across various data sources.
- **`TaskAgent`**
+ **`TaskAgent`**
Handles task decomposition and management, breaking down complex tasks into manageable subtasks.
diff --git a/docs/mintlify/key_modules/browsertoolkit.mdx b/docs/mintlify/key_modules/browsertoolkit.mdx
index 29a4afc889..fdd8876fb5 100644
--- a/docs/mintlify/key_modules/browsertoolkit.mdx
+++ b/docs/mintlify/key_modules/browsertoolkit.mdx
@@ -136,4 +136,4 @@ answer = browser_toolkit.browser.ask_question_about_video(question=question)
print(answer)
```
-
\ No newline at end of file
+
diff --git a/docs/mintlify/key_modules/datagen.mdx b/docs/mintlify/key_modules/datagen.mdx
index 45247ad6d3..ed726f7dea 100644
--- a/docs/mintlify/key_modules/datagen.mdx
+++ b/docs/mintlify/key_modules/datagen.mdx
@@ -27,7 +27,7 @@ This page introduces CAMEL's **data generation modules** for creating high-quali
-**CoTDataGenerator Class**
+**CoTDataGenerator Class**
The main class that implements the CoT generation system with the following capabilities:
@@ -405,7 +405,7 @@ The main class that implements the CoT generation system with the following capa
# Initialize agents
reason_agent = ChatAgent(
- """Answer my question and give your
+ """Answer my question and give your
final answer within \\boxed{}."""
)
diff --git a/docs/mintlify/key_modules/embeddings.mdx b/docs/mintlify/key_modules/embeddings.mdx
index 27f5094f7f..6a3927f027 100644
--- a/docs/mintlify/key_modules/embeddings.mdx
+++ b/docs/mintlify/key_modules/embeddings.mdx
@@ -11,8 +11,8 @@ icon: vector-square
-Text embeddings turn sentences or documents into high-dimensional vectors that capture meaning.
-Example:
+Text embeddings turn sentences or documents into high-dimensional vectors that capture meaning.
+Example:
“A young boy is playing soccer in a park.”
“A child is kicking a football on a playground.”
diff --git a/docs/mintlify/key_modules/loaders.mdx b/docs/mintlify/key_modules/loaders.mdx
index fd06347a01..628d040ac0 100644
--- a/docs/mintlify/key_modules/loaders.mdx
+++ b/docs/mintlify/key_modules/loaders.mdx
@@ -289,7 +289,7 @@ That’s it. With just a couple of lines, you can turn any website into clean ma
---
-Chunkr Reader allows you to process PDFs (and other docs) in chunks, with built-in OCR and format control.
+Chunkr Reader allows you to process PDFs (and other docs) in chunks, with built-in OCR and format control.
Below is a basic usage pattern:
Initialize the `ChunkrReader` and `ChunkrReaderConfig`, set the file path and chunking options, then submit your task and fetch results:
diff --git a/docs/mintlify/key_modules/memory.mdx b/docs/mintlify/key_modules/memory.mdx
index ab473303c7..91e941da10 100644
--- a/docs/mintlify/key_modules/memory.mdx
+++ b/docs/mintlify/key_modules/memory.mdx
@@ -126,7 +126,7 @@ icon: memory
- **What it is:**
+ **What it is:**
The basic data unit in CAMEL’s memory system—everything stored/retrieved flows through this structure.
**Attributes:**
@@ -142,7 +142,7 @@ icon: memory
- **What it is:**
+ **What it is:**
Result of memory retrieval from `AgentMemory`, scored for context relevance.
**Attributes:**
@@ -151,7 +151,7 @@ icon: memory
- **What it is:**
+ **What it is:**
The core “building block” for agent memory, following the Composite design pattern (supports tree structures).
**Key methods:**
@@ -161,7 +161,7 @@ icon: memory
- **What it is:**
+ **What it is:**
Defines strategies for generating agent context when data exceeds model limits.
**Key methods/properties:**
@@ -171,7 +171,7 @@ icon: memory
- **What it is:**
+ **What it is:**
Specialized `MemoryBlock` for direct agent use.
**Key methods:**
@@ -188,7 +188,7 @@ icon: memory
- **What it does:**
+ **What it does:**
Stores and retrieves recent chat history (like a conversation timeline).
**Initialization:**
@@ -200,12 +200,12 @@ icon: memory
- `write_records()`: Add new records
- `clear()`: Remove all chat history
- **Use Case:**
+ **Use Case:**
Best for maintaining the most recent conversation flow/context.
- **What it does:**
+ **What it does:**
Uses vector embeddings for storing and retrieving information based on semantic similarity.
**Initialization:**
@@ -217,7 +217,7 @@ icon: memory
- `write_records()`: Add new records (converted to vectors)
- `clear()`: Remove all vector records
- **Use Case:**
+ **Use Case:**
Ideal for large histories or when semantic search is needed.
@@ -234,8 +234,8 @@ icon: memory
-**What is it?**
-An **AgentMemory** implementation that wraps `ChatHistoryBlock`.
+**What is it?**
+An **AgentMemory** implementation that wraps `ChatHistoryBlock`.
**Best for:** Sequential, recent chat context (simple conversation memory).
**Initialization:**
@@ -251,8 +251,8 @@ An **AgentMemory** implementation that wraps `ChatHistoryBlock`.
-**What is it?**
-An **AgentMemory** implementation that wraps `VectorDBBlock`.
+**What is it?**
+An **AgentMemory** implementation that wraps `VectorDBBlock`.
**Best for:** Semantic search—find relevant messages by meaning, not just recency.
**Initialization:**
@@ -267,8 +267,8 @@ An **AgentMemory** implementation that wraps `VectorDBBlock`.
-**What is it?**
-Combines **ChatHistoryMemory** and **VectorDBMemory** for hybrid memory.
+**What is it?**
+Combines **ChatHistoryMemory** and **VectorDBMemory** for hybrid memory.
**Best for:** Production bots that need both recency & semantic search.
**Initialization:**
@@ -348,7 +348,7 @@ You can subclass `BaseContextCreator` for advanced control.
@property
def token_counter(self):
# Implement your token counting logic
- return
+ return
@property
def token_limit(self):
diff --git a/docs/mintlify/key_modules/models.mdx b/docs/mintlify/key_modules/models.mdx
index 0cd751b4f3..7611640b5f 100644
--- a/docs/mintlify/key_modules/models.mdx
+++ b/docs/mintlify/key_modules/models.mdx
@@ -91,7 +91,7 @@ Integrate your favorite models into CAMEL-AI with straightforward Python calls.
-
+
Here's how you use OpenAI models such as GPT-4o-mini with CAMEL:
```python
@@ -118,7 +118,7 @@ Integrate your favorite models into CAMEL-AI with straightforward Python calls.
-
+
Using Google's Gemini models in CAMEL:
- **Google AI Studio** ([Quick Start](https://aistudio.google.com/)): Try models quickly in a no-code environment.
@@ -149,7 +149,7 @@ Integrate your favorite models into CAMEL-AI with straightforward Python calls.
-
+
Integrate Mistral AI models like Mistral Medium into CAMEL:
```python
@@ -176,7 +176,7 @@ Integrate your favorite models into CAMEL-AI with straightforward Python calls.
-
+
Leveraging Anthropic's Claude models within CAMEL:
```python
@@ -203,10 +203,10 @@ Integrate your favorite models into CAMEL-AI with straightforward Python calls.
-
+
Leverage [CometAPI](https://api.cometapi.com/)'s unified access to multiple frontier AI models:
- - **CometAPI Platform** ([CometAPI](https://www.cometapi.com/?utm_source=camel-ai&utm_campaign=integration&utm_medium=integration&utm_content=integration)):
+ - **CometAPI Platform** ([CometAPI](https://www.cometapi.com/?utm_source=camel-ai&utm_campaign=integration&utm_medium=integration&utm_content=integration)):
- **API Key Setup**: Obtain your CometAPI key to start integration.
- **OpenAI Compatible**: Use familiar OpenAI API patterns with advanced frontier models.
@@ -265,7 +265,7 @@ Integrate your favorite models into CAMEL-AI with straightforward Python calls.
ModelType.COMETAPI_QWEN3_30B_A3B,
ModelType.COMETAPI_QWEN3_CODER_PLUS_2025_07_22
]
-
+
for model_type in models_to_try:
model = ModelFactory.create(
model_platform=ModelPlatformType.COMETAPI,
@@ -277,7 +277,7 @@ Integrate your favorite models into CAMEL-AI with straightforward Python calls.
-
+
Leverage [Nebius AI Studio](https://nebius.com/)'s high-performance GPU cloud with OpenAI-compatible models:
- **Nebius AI Studio** ([Platform](https://studio.nebius.com/)): Access powerful models through their cloud infrastructure.
@@ -319,7 +319,7 @@ Integrate your favorite models into CAMEL-AI with straightforward Python calls.
- **Complete Access:** All models available on [Nebius AI Studio](https://studio.nebius.com/) are supported
- **Predefined Enums:** Common models like `NEBIUS_GPT_OSS_120B`, `NEBIUS_DEEPSEEK_V3`, etc.
- **String-based Access:** Use any model name directly as a string for maximum flexibility
-
+
**Example with any model:**
```python
# Use any model available on Nebius
@@ -393,7 +393,7 @@ Integrate your favorite models into CAMEL-AI with straightforward Python calls.
- `OPENROUTER_LLAMA_4_SCOUT` - Meta's Llama 4 Scout model
- `OPENROUTER_OLYMPICODER_7B` - Open R1's OlympicCoder 7B model
- `OPENROUTER_HORIZON_ALPHA` - Horizon Alpha model
-
+
Free versions are also available for some models (e.g., `OPENROUTER_LLAMA_4_MAVERICK_FREE`).
@@ -427,7 +427,7 @@ Integrate your favorite models into CAMEL-AI with straightforward Python calls.
-
+
Using [Groq](https://groq.com/)'s powerful models (e.g., Llama 3.3-70B):
```python
diff --git a/docs/mintlify/key_modules/prompts.mdx b/docs/mintlify/key_modules/prompts.mdx
index 12eb807a53..4b1dd9ccf7 100644
--- a/docs/mintlify/key_modules/prompts.mdx
+++ b/docs/mintlify/key_modules/prompts.mdx
@@ -186,7 +186,7 @@ prompt2 = TextPrompt('Welcome, {name}!')
# Concatenation
prompt3 = prompt1 + ' ' + prompt2
-print(prompt3)
+print(prompt3)
# >>> "Hello, {name}! Welcome, {name}!"
print(isinstance(prompt3, TextPrompt)) # >>> True
print(prompt3.key_words) # >>> {'name'}
@@ -298,5 +298,3 @@ print(prompt5.key_words) # >>> {'NAME'}
-
-
diff --git a/docs/mintlify/key_modules/retrievers.mdx b/docs/mintlify/key_modules/retrievers.mdx
index 3bf60812a6..133185baa8 100644
--- a/docs/mintlify/key_modules/retrievers.mdx
+++ b/docs/mintlify/key_modules/retrievers.mdx
@@ -109,8 +109,8 @@ Use AutoRetriever for fast experiments and RAG workflows; for advanced control,
-For simple, blazing-fast search by keyword—use KeywordRetriever.
-Great for small data, transparency, or keyword-driven tasks.
+For simple, blazing-fast search by keyword—use KeywordRetriever.
+Great for small data, transparency, or keyword-driven tasks.
*(API and code example coming soon—see RAG Cookbook for details.)*
@@ -130,4 +130,3 @@ Great for small data, transparency, or keyword-driven tasks.
Full configuration and options for all retriever classes.
-
diff --git a/docs/mintlify/key_modules/runtimes.mdx b/docs/mintlify/key_modules/runtimes.mdx
index 718002b362..977fb3ef0a 100644
--- a/docs/mintlify/key_modules/runtimes.mdx
+++ b/docs/mintlify/key_modules/runtimes.mdx
@@ -122,11 +122,10 @@ All runtimes inherit from BaseRuntime, which defines core methods:
## More Examples
-You’ll find runnable scripts for each runtime in [examples/runtime](https://github.com/camel-ai/camel/tree/master/examples/runtimes)/ in our main repo.
+You’ll find runnable scripts for each runtime in [examples/runtime](https://github.com/camel-ai/camel/tree/master/examples/runtimes)/ in our main repo.
Each script demonstrates how to initialize and use a specific runtime—perfect for experimentation or production setups.
## Final Note
-The runtime system primarily sandboxes FunctionTool-style tool functions.
+The runtime system primarily sandboxes FunctionTool-style tool functions.
For agent-level, dynamic code execution, always consider dedicated sandboxing—such as UbuntuDockerRuntime’s exec_python_file()—for running dynamically generated scripts with maximum isolation and safety.
-
diff --git a/docs/mintlify/key_modules/storages.mdx b/docs/mintlify/key_modules/storages.mdx
index 2a6a1cf6e0..9c374f95f5 100644
--- a/docs/mintlify/key_modules/storages.mdx
+++ b/docs/mintlify/key_modules/storages.mdx
@@ -76,8 +76,8 @@ The Storage module in CAMEL-AI gives you a **unified interface for saving
**BaseGraphStorage**
- Abstract base for graph database integrations
- - **Supports:**
- - Schema queries and refresh
+ - **Supports:**
+ - Schema queries and refresh
- Adding/deleting/querying triplets
**NebulaGraph**
@@ -99,7 +99,7 @@ Here are practical usage patterns for each storage type—pick the ones you need
---
- Use for: Fast, temporary storage. Data is lost when your program exits.
+ Use for: Fast, temporary storage. Data is lost when your program exits.
Perfect for: Prototyping, testing, in-memory caching.
@@ -125,7 +125,7 @@ Here are practical usage patterns for each storage type—pick the ones you need
---
- Use for: Persistent, human-readable storage on disk.
+ Use for: Persistent, human-readable storage on disk.
Perfect for: Logs, local settings, configs, or sharing small data sets.
@@ -152,7 +152,7 @@ Here are practical usage patterns for each storage type—pick the ones you need
---
- Use for: Scalable, high-performance vector search (RAG, embeddings).
+ Use for: Scalable, high-performance vector search (RAG, embeddings).
Perfect for: Semantic search and production AI retrieval.
@@ -187,7 +187,7 @@ Here are practical usage patterns for each storage type—pick the ones you need
---
- Use for: Hybrid cloud-native storage, vectors + SQL in one.
+ Use for: Hybrid cloud-native storage, vectors + SQL in one.
Perfect for: Combining AI retrieval with your business database.
@@ -223,7 +223,7 @@ Here are practical usage patterns for each storage type—pick the ones you need
- Use for: Fast, scalable open-source vector search.
+ Use for: Fast, scalable open-source vector search.
Perfect for: RAG, document search, and high-scale retrieval tasks.
@@ -260,7 +260,7 @@ Here are practical usage patterns for each storage type—pick the ones you need
---
- Use for: Fastest way to build LLM apps with memory and embeddings.
+ Use for: Fastest way to build LLM apps with memory and embeddings.
Perfect for: From prototyping in notebooks to production clusters with the same simple API.
@@ -353,7 +353,7 @@ Here are practical usage patterns for each storage type—pick the ones you need
---
- Use for: Massive vector storage with advanced analytics.
+ Use for: Massive vector storage with advanced analytics.
Perfect for: Batch operations, cloud or on-prem setups, and high-throughput search.
@@ -449,7 +449,7 @@ Here are practical usage patterns for each storage type—pick the ones you need
---
- Use for: Vector search with hybrid (vector + keyword) capabilities.
+ Use for: Vector search with hybrid (vector + keyword) capabilities.
Perfect for: Document retrieval and multimodal AI apps.
@@ -491,7 +491,7 @@ Here are practical usage patterns for each storage type—pick the ones you need
---
- Use for: Open-source, distributed graph storage and querying.
+ Use for: Open-source, distributed graph storage and querying.
Perfect for: Knowledge graphs, relationships, and fast distributed queries.
@@ -510,7 +510,7 @@ Here are practical usage patterns for each storage type—pick the ones you need
---
- Use for: Industry-standard graph database for large-scale relationships.
+ Use for: Industry-standard graph database for large-scale relationships.
Perfect for: Enterprise graph workloads, Cypher queries, analytics.
```python
diff --git a/docs/mintlify/key_modules/tasks.mdx b/docs/mintlify/key_modules/tasks.mdx
index a1269632e8..7fc0aae818 100644
--- a/docs/mintlify/key_modules/tasks.mdx
+++ b/docs/mintlify/key_modules/tasks.mdx
@@ -6,7 +6,7 @@ icon: list-check
For more detailed usage information, please refer to our cookbook: [Task Generation Cookbook](../cookbooks/multi_agent_society/task_generation.ipynb)
-A task in CAMEL is a structured assignment that can be given to one or more agents. Tasks are higher-level than prompts and managed by modules like the Planner and Workforce.
+A task in CAMEL is a structured assignment that can be given to one or more agents. Tasks are higher-level than prompts and managed by modules like the Planner and Workforce.
Key ideas:
- Tasks can be collaborative, requiring multiple agents.
- Tasks can be decomposed into subtasks or evolved over time.
diff --git a/docs/mintlify/key_modules/terminaltoolkit.mdx b/docs/mintlify/key_modules/terminaltoolkit.mdx
index e451384d85..b0b52c94a9 100644
--- a/docs/mintlify/key_modules/terminaltoolkit.mdx
+++ b/docs/mintlify/key_modules/terminaltoolkit.mdx
@@ -149,4 +149,4 @@ help_result = terminal_toolkit.ask_user_for_help(id='session_1')
# in the console. After the user types '/exit', the script will resume.
print(help_result)
```
-
\ No newline at end of file
+
diff --git a/docs/mintlify/key_modules/tools.mdx b/docs/mintlify/key_modules/tools.mdx
index c3042e4882..8686776079 100644
--- a/docs/mintlify/key_modules/tools.mdx
+++ b/docs/mintlify/key_modules/tools.mdx
@@ -6,12 +6,12 @@ icon: screwdriver-wrench
For more detailed usage information, please refer to our cookbook: [Tools Cookbook](../cookbooks/advanced_features/agents_with_tools.ipynb)
- A Tool in CAMEL is a callable function with a name, description, input parameters, and an output type.
+ A Tool in CAMEL is a callable function with a name, description, input parameters, and an output type.
Tools act as the interface between agents and the outside world—think of them like OpenAI Functions you can easily convert, extend, or use directly.
- A Toolkit is a curated collection of related tools designed to work together for a specific purpose.
+ A Toolkit is a curated collection of related tools designed to work together for a specific purpose.
CAMEL provides a range of built-in toolkits—covering everything from web search and data extraction to code execution, GitHub integration, and much more.
diff --git a/docs/mintlify/mcp/camel_agents_as_an_mcp_clients.mdx b/docs/mintlify/mcp/camel_agents_as_an_mcp_clients.mdx
index b217c214a9..033e8a71b3 100644
--- a/docs/mintlify/mcp/camel_agents_as_an_mcp_clients.mdx
+++ b/docs/mintlify/mcp/camel_agents_as_an_mcp_clients.mdx
@@ -100,7 +100,7 @@ You can use sse or streamable-http for ACI.dev, pick w
- Once connected, you can extend your setup with other servers from ACI.dev, Composio, or `npx`.
+ Once connected, you can extend your setup with other servers from ACI.dev, Composio, or `npx`.
- Use `stdio` for local testing, `sse` or `streamable-http` for cloud tools.
@@ -122,7 +122,7 @@ This diagram illustrates how CAMEL agents use MCPToolkit to seamlessly connect w
-Want your MCP agent discoverable by thousands of clients?
+Want your MCP agent discoverable by thousands of clients?
Register it with a hub like ACI.dev or similar.
```python Register with ACI Registry lines icon="python"
from camel.agents import MCPAgent
@@ -148,11 +148,11 @@ Your agent is now connected to the AC
-Finding MCP servers is now a breeze with PulseMCP integration.
+Finding MCP servers is now a breeze with PulseMCP integration.
You don’t have to guess which MCP servers are available, just search, browse, and connect.
-PulseMCP acts as a living directory of the entire MCP ecosystem.
+PulseMCP acts as a living directory of the entire MCP ecosystem.
CAMEL toolkits can plug directly into PulseMCP, letting you browse and connect to thousands of servers, all kept up to date in real time.
You can visit [PulseMCP.com](https://pulsemcp.com) to browse all available MCP servers—everything from file systems and search to specialized APIs.
@@ -172,7 +172,7 @@ PulseMCP does the heavy lifting of finding, categorizing, and keeping MCP server
-Don’t need advanced tool-calling?
+Don’t need advanced tool-calling?
See this example for a super-lightweight setup.
diff --git a/docs/mintlify/mcp/camel_toolkits_as_an_mcp_server.mdx b/docs/mintlify/mcp/camel_toolkits_as_an_mcp_server.mdx
index 1e038fe76e..67d7a85fb0 100644
--- a/docs/mintlify/mcp/camel_toolkits_as_an_mcp_server.mdx
+++ b/docs/mintlify/mcp/camel_toolkits_as_an_mcp_server.mdx
@@ -13,7 +13,7 @@ description: "Share any CAMEL toolkit as an MCP server so external clients and a
- With one command, you can flip any toolkit into an MCP server.
+ With one command, you can flip any toolkit into an MCP server.
Now, any MCP-compatible client or agent can call your tools—locally or over the network.
@@ -25,14 +25,14 @@ description: "Share any CAMEL toolkit as an MCP server so external clients and a
You can turn any CAMEL toolkit into a full-featured MCP server—making its tools instantly available to other AI agents or external apps via the Model Context Protocol.
-Why do this?
+Why do this?
- Instantly share your agent tools with external clients (e.g., Claude, Cursor, custom dashboards).
- Enable distributed, language-agnostic tool execution across different systems and teams.
- Easily test, debug, and reuse your tools—no need to change the toolkit or agent code.
### Launch a Toolkit Server
-Below is a minimal script to expose ArxivToolkit as an MCP server.
+Below is a minimal script to expose ArxivToolkit as an MCP server.
Swap in any other toolkit (e.g., SearchToolkit, MathToolkit), they all work the same way!
diff --git a/docs/mintlify/mcp/connecting_existing_mcp_tools.mdx b/docs/mintlify/mcp/connecting_existing_mcp_tools.mdx
index dbedbe9527..f6adcebcac 100644
--- a/docs/mintlify/mcp/connecting_existing_mcp_tools.mdx
+++ b/docs/mintlify/mcp/connecting_existing_mcp_tools.mdx
@@ -6,17 +6,17 @@ icon: 'network'
## Overview
-You can connect any Model Context Protocol (MCP) tool—like the official filesystem server—directly to your CAMEL ChatAgent.
+You can connect any Model Context Protocol (MCP) tool—like the official filesystem server—directly to your CAMEL ChatAgent.
This gives your agents natural language access to external filesystems, databases, or any MCP-compatible service.
-Use Case:
+Use Case:
Let your agent list files or read documents by wiring up the official MCP Filesystem server as a tool—no code changes to the agent required!
- You can use any MCP-compatible tool.
+ You can use any MCP-compatible tool.
For this example, we'll use the official filesystem server from the Model Context Protocol community.
Install globally using npm:
@@ -150,6 +150,6 @@ Let your agent list files or read documents by wiring up the official MCP Filesy
-That's it!
+That's it!
Your CAMEL agent can now leverage any external tool (filesystem, APIs, custom scripts) that supports MCP. Plug and play!
diff --git a/docs/mintlify/mcp/export_camel_agent_as_mcp_server.mdx b/docs/mintlify/mcp/export_camel_agent_as_mcp_server.mdx
index c64c7c22b8..7ea13a9fbe 100644
--- a/docs/mintlify/mcp/export_camel_agent_as_mcp_server.mdx
+++ b/docs/mintlify/mcp/export_camel_agent_as_mcp_server.mdx
@@ -14,8 +14,8 @@ Any MCP-compatible client (Claude, Cursor, editors, or your own app) can connect
-Scripted Server:
-Launch your agent as an MCP server with the ready-made scripts in services/.
+Scripted Server:
+Launch your agent as an MCP server with the ready-made scripts in services/.
Configure your MCP client (Claude, Cursor, etc.) to connect:
```json mcp_servers_config.json Example highlight={5}
{
@@ -71,7 +71,7 @@ if __name__ == "__main__":
## Real-world Example
-You can use Claude, Cursor, or any other app to call your custom agent!
+You can use Claude, Cursor, or any other app to call your custom agent!
Just connect to your CAMEL MCP server
@@ -94,6 +94,6 @@ You can expose any number of custom tools, multi-agent workflows, or domain know
---
-Want to create your own tools and toolkits?
+Want to create your own tools and toolkits?
See Toolkits Reference for everything you can expose to the MCP ecosystem!
diff --git a/docs/mintlify/mcp/mcp_hub.mdx b/docs/mintlify/mcp/mcp_hub.mdx
index 4cc77dfc6a..4077a53ea6 100644
--- a/docs/mintlify/mcp/mcp_hub.mdx
+++ b/docs/mintlify/mcp/mcp_hub.mdx
@@ -2,4 +2,4 @@
title: "CAMEL-AI MCPHub"
icon: warehouse
url: "https://mcp.camel-ai.org/"
----
\ No newline at end of file
+---
diff --git a/docs/mintlify/mcp/overview.mdx b/docs/mintlify/mcp/overview.mdx
index e2ebcb79b8..60e3ef12ff 100644
--- a/docs/mintlify/mcp/overview.mdx
+++ b/docs/mintlify/mcp/overview.mdx
@@ -8,7 +8,7 @@ icon: 'play'
MCP (Model Context Protocol) originated from an [Anthropic article](https://www.anthropic.com/news/model-context-protocol) published on November 25, 2024: *Introducing the Model Context Protocol*.
-MCP defines **how applications and AI models exchange contextual information**.
+MCP defines **how applications and AI models exchange contextual information**.
It enables developers to connect data sources, tools, and functions to LLMs using a universal, standardized protocol—much like USB-C enables diverse devices to connect via a single interface.
@@ -70,26 +70,26 @@ MCP follows a **client-server model** with three main roles:

### How it works, step by step:
-1. **User asks:**
+1. **User asks:**
“What documents do I have on my desktop?” via the Host (e.g., Claude Desktop).
-2. **Host (MCP Host):**
+2. **Host (MCP Host):**
Receives your question and forwards it to the Claude model.
-3. **Client (MCP Client):**
+3. **Client (MCP Client):**
Claude model decides it needs more data, Client is activated to connect to a file system MCP Server.
-4. **Server (MCP Server):**
+4. **Server (MCP Server):**
The server reads your desktop directory and returns a list of documents.
-5. **Results:**
+5. **Results:**
Claude uses this info to answer your question, displayed in your desktop app.
-This architecture **lets agents dynamically call tools and access data**—local or remote—while developers only focus on building the relevant MCPServer.
+This architecture **lets agents dynamically call tools and access data**—local or remote—while developers only focus on building the relevant MCPServer.
**You don’t have to handle the nitty-gritty of connecting Hosts and Clients.**
-For deeper architecture details and diagrams, see the
+For deeper architecture details and diagrams, see the
official MCP docs: Architecture Concepts.
@@ -100,4 +100,3 @@ For deeper architecture details and diagrams, see the
- **Users** get safer, more flexible, and privacy-friendly AI workflows.
---
-
diff --git a/docs/mintlify/reference/camel.agents.chat_agent.mdx b/docs/mintlify/reference/camel.agents.chat_agent.mdx
index 5805742fd7..3fecd4dfe5 100644
--- a/docs/mintlify/reference/camel.agents.chat_agent.mdx
+++ b/docs/mintlify/reference/camel.agents.chat_agent.mdx
@@ -1453,7 +1453,7 @@ intermediate responses as they are generated.
- **input_message** (Union[BaseMessage, str]): The input message for the agent.
- **response_format** (Optional[Type[BaseModel]], optional): A Pydantic model defining the expected structure of the response.
-- **Yields**:
+- **Yields**:
- **ChatAgentResponse**: Intermediate responses containing partial content, tool calls, and other information as they become available.
diff --git a/docs/mintlify/reference/camel.loaders.firecrawl_reader.mdx b/docs/mintlify/reference/camel.loaders.firecrawl_reader.mdx
index f3ab330023..5a0d0c0476 100644
--- a/docs/mintlify/reference/camel.loaders.firecrawl_reader.mdx
+++ b/docs/mintlify/reference/camel.loaders.firecrawl_reader.mdx
@@ -14,7 +14,7 @@ Firecrawl allows you to turn entire websites into LLM-ready markdown.
- **api_key** (Optional[str]): API key for authenticating with the Firecrawl API.
- **api_url** (Optional[str]): Base URL for the Firecrawl API.
-- **References**:
+- **References**:
- **https**: //docs.firecrawl.dev/introduction
diff --git a/docs/mintlify/reference/camel.loaders.jina_url_reader.mdx b/docs/mintlify/reference/camel.loaders.jina_url_reader.mdx
index bcb5868eb5..4a726e233a 100644
--- a/docs/mintlify/reference/camel.loaders.jina_url_reader.mdx
+++ b/docs/mintlify/reference/camel.loaders.jina_url_reader.mdx
@@ -18,7 +18,7 @@ replace the UnstructuredIO URL Reader in the pipeline.
- **return_format** (ReturnFormat, optional): The level of detail of the returned content, which is optimized for LLMs. For now screenshots are not supported. Defaults to ReturnFormat.DEFAULT.
- **json_response** (bool, optional): Whether to return the response in JSON format. Defaults to False.
- **timeout** (int, optional): The maximum time in seconds to wait for the page to be rendered. Defaults to 30. **kwargs (Any): Additional keyword arguments, including proxies, cookies, etc. It should align with the HTTP Header field and value pairs listed in the reference.
-- **References**:
+- **References**:
- **https**: //jina.ai/reader
diff --git a/docs/mintlify/reference/camel.loaders.scrapegraph_reader.mdx b/docs/mintlify/reference/camel.loaders.scrapegraph_reader.mdx
index 5834f25db7..39bbb7ce22 100644
--- a/docs/mintlify/reference/camel.loaders.scrapegraph_reader.mdx
+++ b/docs/mintlify/reference/camel.loaders.scrapegraph_reader.mdx
@@ -14,7 +14,7 @@ searching.
**Parameters:**
- **api_key** (Optional[str]): API key for authenticating with the ScrapeGraphAI API.
-- **References**:
+- **References**:
- **https**: //scrapegraph.ai/
diff --git a/docs/mintlify/reference/camel.models.aws_bedrock_model.mdx b/docs/mintlify/reference/camel.models.aws_bedrock_model.mdx
index 66c018a8f4..c03b3e4310 100644
--- a/docs/mintlify/reference/camel.models.aws_bedrock_model.mdx
+++ b/docs/mintlify/reference/camel.models.aws_bedrock_model.mdx
@@ -20,7 +20,7 @@ AWS Bedrock API in a unified OpenAICompatibleModel interface.
- **token_counter** (BaseTokenCounter, optional): Token counter to use for the model. If not provided, :obj:`OpenAITokenCounter( ModelType.GPT_4O_MINI)` will be used. (default: :obj:`None`)
- **timeout** (Optional[float], optional): The timeout value in seconds for API calls. If not provided, will fall back to the MODEL_TIMEOUT environment variable or default to 180 seconds. (default: :obj:`None`)
- **max_retries** (int, optional): Maximum number of retries for API calls. (default: :obj:`3`) **kwargs (Any): Additional arguments to pass to the client initialization.
-- **References**:
+- **References**:
- **https**: //docs.aws.amazon.com/bedrock/latest/APIReference/welcome.html
diff --git a/docs/mintlify/reference/camel.models.azure_openai_model.mdx b/docs/mintlify/reference/camel.models.azure_openai_model.mdx
index cb881f1b69..33c48bc45b 100644
--- a/docs/mintlify/reference/camel.models.azure_openai_model.mdx
+++ b/docs/mintlify/reference/camel.models.azure_openai_model.mdx
@@ -25,7 +25,7 @@ Azure OpenAI API in a unified BaseModelBackend interface.
- **client** (Optional[Any], optional): A custom synchronous AzureOpenAI client instance. If provided, this client will be used instead of creating a new one. Useful for RL frameworks like AReaL or rLLM that provide Azure OpenAI-compatible clients. The client should implement the AzureOpenAI client interface with `.chat.completions.create()` and `.beta.chat.completions.parse()` methods. (default: :obj:`None`)
- **async_client** (Optional[Any], optional): A custom asynchronous AzureOpenAI client instance. If provided, this client will be used instead of creating a new one. The client should implement the AsyncAzureOpenAI client interface. (default: :obj:`None`)
- **azure_deployment_name** (Optional[str], optional): **Deprecated**. Use `model_type` parameter instead. This parameter is kept for backward compatibility and will be removed in a future version. (default: :obj:`None`) **kwargs (Any): Additional arguments to pass to the client initialization. Ignored if custom clients are provided.
-- **References**:
+- **References**:
- **https**: //learn.microsoft.com/en-us/azure/ai-services/openai/
diff --git a/docs/mintlify/reference/camel.models.deepseek_model.mdx b/docs/mintlify/reference/camel.models.deepseek_model.mdx
index 515c27f716..3ddde463d1 100644
--- a/docs/mintlify/reference/camel.models.deepseek_model.mdx
+++ b/docs/mintlify/reference/camel.models.deepseek_model.mdx
@@ -19,7 +19,7 @@ DeepSeek API in a unified OpenAICompatibleModel interface.
- **token_counter** (Optional[BaseTokenCounter], optional): Token counter to use for the model. If not provided, :obj:`OpenAITokenCounter` will be used. (default: :obj:`None`)
- **timeout** (Optional[float], optional): The timeout value in seconds for API calls. If not provided, will fall back to the MODEL_TIMEOUT environment variable or default to 180 seconds. (default: :obj:`None`)
- **max_retries** (int, optional): Maximum number of retries for API calls. (default: :obj:`3`) **kwargs (Any): Additional arguments to pass to the client initialization.
-- **References**:
+- **References**:
- **https**: //api-docs.deepseek.com/
diff --git a/docs/mintlify/reference/camel.models.ollama_model.mdx b/docs/mintlify/reference/camel.models.ollama_model.mdx
index 998f1203b1..1208693740 100644
--- a/docs/mintlify/reference/camel.models.ollama_model.mdx
+++ b/docs/mintlify/reference/camel.models.ollama_model.mdx
@@ -20,7 +20,7 @@ Ollama service interface.
- **token_counter** (Optional[BaseTokenCounter], optional): Token counter to use for the model. If not provided, :obj:`OpenAITokenCounter( ModelType.GPT_4O_MINI)` will be used. (default: :obj:`None`)
- **timeout** (Optional[float], optional): The timeout value in seconds for API calls. If not provided, will fall back to the MODEL_TIMEOUT environment variable or default to 180 seconds. (default: :obj:`None`)
- **max_retries** (int, optional): Maximum number of retries for API calls. (default: :obj:`3`) **kwargs (Any): Additional arguments to pass to the client initialization.
-- **References**:
+- **References**:
- **https**: //github.com/ollama/ollama/blob/main/docs/openai.md
diff --git a/docs/mintlify/reference/camel.models.vllm_model.mdx b/docs/mintlify/reference/camel.models.vllm_model.mdx
index 94f9d9b806..e094621b1f 100644
--- a/docs/mintlify/reference/camel.models.vllm_model.mdx
+++ b/docs/mintlify/reference/camel.models.vllm_model.mdx
@@ -19,7 +19,7 @@ vLLM service interface.
- **token_counter** (Optional[BaseTokenCounter], optional): Token counter to use for the model. If not provided, :obj:`OpenAITokenCounter( ModelType.GPT_4O_MINI)` will be used. (default: :obj:`None`)
- **timeout** (Optional[float], optional): The timeout value in seconds for API calls. If not provided, will fall back to the MODEL_TIMEOUT environment variable or default to 180 seconds. (default: :obj:`None`)
- **max_retries** (int, optional): Maximum number of retries for API calls. (default: :obj:`3`) **kwargs (Any): Additional arguments to pass to the client initialization.
-- **References**:
+- **References**:
- **https**: //docs.vllm.ai/en/latest/serving/openai_compatible_server.html
diff --git a/docs/mintlify/reference/camel.retrievers.bm25_retriever.mdx b/docs/mintlify/reference/camel.retrievers.bm25_retriever.mdx
index 80674920ec..e0d782cd4e 100644
--- a/docs/mintlify/reference/camel.retrievers.bm25_retriever.mdx
+++ b/docs/mintlify/reference/camel.retrievers.bm25_retriever.mdx
@@ -19,7 +19,7 @@ frequency of the query terms.
- **bm25** (BM25Okapi): An instance of the BM25Okapi class used for calculating document scores.
- **content_input_path** (str): The path to the content that has been processed and stored.
- **unstructured_modules** (UnstructuredIO): A module for parsing files and URLs and chunking content based on specified parameters.
-- **References**:
+- **References**:
- **https**: //github.com/dorianbrown/rank_bm25
diff --git a/docs/mintlify/reference/camel.retrievers.cohere_rerank_retriever.mdx b/docs/mintlify/reference/camel.retrievers.cohere_rerank_retriever.mdx
index 692844e385..f04ee8cd10 100644
--- a/docs/mintlify/reference/camel.retrievers.cohere_rerank_retriever.mdx
+++ b/docs/mintlify/reference/camel.retrievers.cohere_rerank_retriever.mdx
@@ -15,7 +15,7 @@ model.
- **model_name** (str): The model name to use for re-ranking.
- **api_key** (Optional[str]): The API key for authenticating with the Cohere service.
-- **References**:
+- **References**:
- **https**: //txt.cohere.com/rerank/
diff --git a/docs/mintlify/reference/camel.storages.key_value_storages.mem0_cloud.mdx b/docs/mintlify/reference/camel.storages.key_value_storages.mem0_cloud.mdx
index 00282663c5..f0a6d70438 100644
--- a/docs/mintlify/reference/camel.storages.key_value_storages.mem0_cloud.mdx
+++ b/docs/mintlify/reference/camel.storages.key_value_storages.mem0_cloud.mdx
@@ -18,7 +18,7 @@ search, and manage text with context.
- **api_key** (str, optional): The API key for authentication. If not provided, will try to get from environment variable MEM0_API_KEY (default: :obj:`None`).
- **user_id** (str, optional): Default user ID to associate memories with (default: :obj:`None`).
- **metadata** (Dict[str, Any], optional): Default metadata to include with all memories (default: :obj:`None`).
-- **References**:
+- **References**:
- **https**: //docs.mem0.ai
diff --git a/docs/mintlify/reference/camel.storages.object_storages.amazon_s3.mdx b/docs/mintlify/reference/camel.storages.object_storages.amazon_s3.mdx
index 1e77af5ccf..06b48cfaa8 100644
--- a/docs/mintlify/reference/camel.storages.object_storages.amazon_s3.mdx
+++ b/docs/mintlify/reference/camel.storages.object_storages.amazon_s3.mdx
@@ -22,7 +22,7 @@ logged in with AWS CLI).
- **access_key_id** (Optional[str], optional): The AWS access key ID. Defaults to None.
- **secret_access_key** (Optional[str], optional): The AWS secret access key. Defaults to None.
- **anonymous** (bool, optional): Whether to use anonymous access. Defaults to False.
-- **References**:
+- **References**:
- **https**: //aws.amazon.com/pm/serv-s3/
- **https**: //aws.amazon.com/cli/
diff --git a/docs/mintlify/reference/camel.storages.object_storages.azure_blob.mdx b/docs/mintlify/reference/camel.storages.object_storages.azure_blob.mdx
index d785edb613..ea16da0135 100644
--- a/docs/mintlify/reference/camel.storages.object_storages.azure_blob.mdx
+++ b/docs/mintlify/reference/camel.storages.object_storages.azure_blob.mdx
@@ -16,7 +16,7 @@ container in the storage account.
- **storage_account_name** (str): The name of the storage account.
- **container_name** (str): The name of the container.
- **access_key** (Optional[str], optional): The access key of the storage account. Defaults to None.
-- **References**:
+- **References**:
- **https**: //azure.microsoft.com/en-us/products/storage/blobs
diff --git a/docs/mintlify/reference/camel.storages.object_storages.google_cloud.mdx b/docs/mintlify/reference/camel.storages.object_storages.google_cloud.mdx
index eee8cf59d7..fc5014378e 100644
--- a/docs/mintlify/reference/camel.storages.object_storages.google_cloud.mdx
+++ b/docs/mintlify/reference/camel.storages.object_storages.google_cloud.mdx
@@ -20,7 +20,7 @@ line tool and save the credentials first.
- **bucket_name** (str): The name of the bucket.
- **create_if_not_exists** (bool, optional): Whether to create the bucket if it does not exist. Defaults to True.
- **anonymous** (bool, optional): Whether to use anonymous access. Defaults to False.
-- **References**:
+- **References**:
- **https**: //cloud.google.com/storage
- **https**: //cloud.google.com/docs/authentication/api-keys
diff --git a/docs/mintlify/reference/camel.toolkits.wechat_official_toolkit.mdx b/docs/mintlify/reference/camel.toolkits.wechat_official_toolkit.mdx
index c2e53b60cc..429c2da036 100644
--- a/docs/mintlify/reference/camel.toolkits.wechat_official_toolkit.mdx
+++ b/docs/mintlify/reference/camel.toolkits.wechat_official_toolkit.mdx
@@ -15,7 +15,7 @@ def _get_wechat_access_token():
**Raises:**
- **ValueError**: If credentials are missing or token retrieval fails.
-- **References**:
+- **References**:
- **https**: //developers.weixin.qq.com/doc/offiaccount/Basic_Information/Get_access_token.html
diff --git a/docs/mintlify/reference/index.mdx b/docs/mintlify/reference/index.mdx
index 58613592e7..2daf4a54ea 100644
--- a/docs/mintlify/reference/index.mdx
+++ b/docs/mintlify/reference/index.mdx
@@ -123,4 +123,4 @@ Generate and work with text embeddings from various providers.
---
-*This API reference is automatically generated from the CAMEL-AI codebase. For the latest updates, visit our [GitHub repository](https://github.com/camel-ai/camel).*
\ No newline at end of file
+*This API reference is automatically generated from the CAMEL-AI codebase. For the latest updates, visit our [GitHub repository](https://github.com/camel-ai/camel).*
diff --git a/docs/reference/camel.agents.chat_agent.md b/docs/reference/camel.agents.chat_agent.md
index 9d087d29a4..7c2210342a 100644
--- a/docs/reference/camel.agents.chat_agent.md
+++ b/docs/reference/camel.agents.chat_agent.md
@@ -1030,7 +1030,7 @@ intermediate responses as they are generated.
- **input_message** (Union[BaseMessage, str]): The input message for the agent.
- **response_format** (Optional[Type[BaseModel]], optional): A Pydantic model defining the expected structure of the response.
-- **Yields**:
+- **Yields**:
- **ChatAgentResponse**: Intermediate responses containing partial content, tool calls, and other information as they become available.
diff --git a/docs/reference/camel.models.azure_openai_model.md b/docs/reference/camel.models.azure_openai_model.md
index d8ce07d378..73f497cab9 100644
--- a/docs/reference/camel.models.azure_openai_model.md
+++ b/docs/reference/camel.models.azure_openai_model.md
@@ -23,7 +23,7 @@ Azure OpenAI API in a unified BaseModelBackend interface.
- **token_counter** (Optional[BaseTokenCounter], optional): Token counter to use for the model. If not provided, :obj:`OpenAITokenCounter` will be used. (default: :obj:`None`)
- **timeout** (Optional[float], optional): The timeout value in seconds for API calls. If not provided, will fall back to the MODEL_TIMEOUT environment variable or default to 180 seconds. (default: :obj:`None`)
- **max_retries** (int, optional): Maximum number of retries for API calls. (default: :obj:`3`) **kwargs (Any): Additional arguments to pass to the client initialization.
-- **References**:
+- **References**:
- **https**: //learn.microsoft.com/en-us/azure/ai-services/openai/
diff --git a/docs/reference/camel.toolkits.gmail_toolkit.md b/docs/reference/camel.toolkits.gmail_toolkit.md
index 4eabd5d00d..6771a15b64 100644
--- a/docs/reference/camel.toolkits.gmail_toolkit.md
+++ b/docs/reference/camel.toolkits.gmail_toolkit.md
@@ -824,19 +824,19 @@ from camel.toolkits import GmailToolkit
def setup_gmail_authentication():
"""Complete Gmail authentication setup."""
-
+
# Check if credentials file exists
if not os.path.exists('credentials.json'):
raise FileNotFoundError(
"credentials.json not found. Please download it from Google Cloud Console."
)
-
+
try:
# Initialize Gmail toolkit (triggers authentication)
gmail = GmailToolkit()
print("✓ Gmail authentication successful!")
return gmail
-
+
except Exception as e:
print(f"✗ Gmail authentication failed: {e}")
print("Please check your credentials.json file and try again.")
@@ -845,7 +845,7 @@ def setup_gmail_authentication():
# Usage
if __name__ == "__main__":
gmail = setup_gmail_authentication()
-
+
# Test authentication by getting user profile
profile = gmail.get_user_profile()
if profile['success']:
diff --git a/examples/agents/agent_step_with_reasoning.py b/examples/agents/agent_step_with_reasoning.py
index 8782cb412a..f8ea261a25 100644
--- a/examples/agents/agent_step_with_reasoning.py
+++ b/examples/agents/agent_step_with_reasoning.py
@@ -16,7 +16,7 @@
from camel.types import ModelPlatformType, ModelType
sys_msg = "You are a helpful assistant."
-usr_msg = """Who is the best basketball player in the world?
+usr_msg = """Who is the best basketball player in the world?
Tell about his career.
"""
@@ -97,7 +97,7 @@
# flake8: noqa: E501
"""
===============================================================================
-Question: Who do you think is the best basketball player in the world? Here are the candidates and their probabilities:
+Question: Who do you think is the best basketball player in the world? Here are the candidates and their probabilities:
1. LeBron James (0.4): Known for his versatility, leadership, and consistent performance over the years. Multiple NBA championships and MVP awards.
2. Giannis Antetokounmpo (0.3): Known as the "Greek Freak," celebrated for his athleticism, defensive skills, and recent NBA championship win. Multiple MVP awards.
3. Stephen Curry (0.3): Renowned for his exceptional shooting ability, revolutionized the game with his three-point shooting. Multiple NBA championships and MVP awards.
diff --git a/examples/agents/mcp_agent/mcp_agent_using_registry.py b/examples/agents/mcp_agent/mcp_agent_using_registry.py
index f25b00080d..4f9b33d56b 100644
--- a/examples/agents/mcp_agent/mcp_agent_using_registry.py
+++ b/examples/agents/mcp_agent/mcp_agent_using_registry.py
@@ -107,9 +107,9 @@ async def main():
5. **Google Workspace Server** (@rishipradeep-think41/gmail-backupmcp)
- Another option for managing Gmail and Calendar
-Each of these tools requires configuration before it can be used. You'll need
-to click on one of the configuration links above to set up the tool with your
-Gmail credentials. Once you've completed the configuration, let me know which
+Each of these tools requires configuration before it can be used. You'll need
+to click on one of the configuration links above to set up the tool with your
+Gmail credentials. Once you've completed the configuration, let me know which
tool you've chosen, and I can help you use it to connect to your Gmail account.
===========================================================================
"""
@@ -122,13 +122,13 @@ async def main():
Based on my search results, here's what I found about CAMEL-AI.org:
## Organization Overview
-CAMEL-AI.org is the first LLM (Large Language Model) multi-agent framework and
-an open-source community. The name CAMEL stands for "Communicative Agents for
+CAMEL-AI.org is the first LLM (Large Language Model) multi-agent framework and
+an open-source community. The name CAMEL stands for "Communicative Agents for
Mind Exploration of Large Language Model Society."
## Core Purpose
-The organization is dedicated to "Finding the Scaling Law of Agents" - this
-appears to be their primary research mission, focusing on understanding how
+The organization is dedicated to "Finding the Scaling Law of Agents" - this
+appears to be their primary research mission, focusing on understanding how
agent-based AI systems scale and develop.
## Research Focus
@@ -140,14 +140,14 @@ async def main():
## Community and Collaboration
- They maintain an active open-source community
-- They invite contributors and collaborators through platforms like Slack and
+- They invite contributors and collaborators through platforms like Slack and
Discord
-- The organization has a research collaboration questionnaire for those
+- The organization has a research collaboration questionnaire for those
interested in building or researching environments for LLM-based agents
## Technical Resources
- Their code is available on GitHub (github.com/camel-ai) with 18 repositories
-- They provide documentation for developers and researchers at
+- They provide documentation for developers and researchers at
docs.camel-ai.org
- They offer tools and cookbooks for working with their agent framework
@@ -156,8 +156,8 @@ async def main():
- GitHub: https://github.com/camel-ai
- Documentation: https://docs.camel-ai.org/
-The organization appears to be at the forefront of research on multi-agent AI
-systems, focusing on how these systems can cooperate autonomously and scale
+The organization appears to be at the forefront of research on multi-agent AI
+systems, focusing on how these systems can cooperate autonomously and scale
effectively.
===========================================================================
"""
diff --git a/examples/agents/mcp_agent/mcp_servers_config.json b/examples/agents/mcp_agent/mcp_servers_config.json
index c0649fef8b..769f6c0b30 100644
--- a/examples/agents/mcp_agent/mcp_servers_config.json
+++ b/examples/agents/mcp_agent/mcp_servers_config.json
@@ -8,4 +8,4 @@
}
},
"mcpWebServers": {}
-}
\ No newline at end of file
+}
diff --git a/examples/agents/repo_agent.py b/examples/agents/repo_agent.py
index eb1b29e4a0..03ed1af639 100644
--- a/examples/agents/repo_agent.py
+++ b/examples/agents/repo_agent.py
@@ -40,16 +40,16 @@
print(response.msgs[0].content)
"""
-Based on your request to learn how to use a `ChatAgent` in CAMEL, I will
-explain key aspects of the implementation provided in the source code
-"retrieved" and guide you on how to create and utilize the `ChatAgent`
+Based on your request to learn how to use a `ChatAgent` in CAMEL, I will
+explain key aspects of the implementation provided in the source code
+"retrieved" and guide you on how to create and utilize the `ChatAgent`
effectively.
### Overview of `ChatAgent`
-`ChatAgent` is designed to interact with language models, supporting
-conversation management, memory, and tool integration.
-It can perform tasks like handling user queries, responding with structured
+`ChatAgent` is designed to interact with language models, supporting
+conversation management, memory, and tool integration.
+It can perform tasks like handling user queries, responding with structured
data, and performing computations.
### Basic Usage of `ChatAgent`
@@ -64,7 +64,7 @@
```
2. **Creating a `ChatAgent` Instance**:
- When you create an instance of `ChatAgent`, you can optionally pass a
+ When you create an instance of `ChatAgent`, you can optionally pass a
`system_message` to define its role and behavior.
```python
@@ -72,7 +72,7 @@
```
3. **Interacting with the Agent**:
- You can have a conversation by using the `step()` method, which allows you
+ You can have a conversation by using the `step()` method, which allows you
to send messages and get responses.
```python
@@ -100,7 +100,7 @@ def calculator(a: int, b: int) -> int:
#### Structured Output
-You can specify structured outputs using Pydantic models to control the
+You can specify structured outputs using Pydantic models to control the
format of the response.
```python
@@ -119,7 +119,7 @@ class StructuredResponse(BaseModel):
### Example with a Specific Model
-The code examples you provided also show how to specify and configure models
+The code examples you provided also show how to specify and configure models
used by `ChatAgent`. Here's how to create a `ChatAgent` with a custom model:
```python
@@ -144,13 +144,13 @@ class StructuredResponse(BaseModel):
### Conclusion
-You can leverage `ChatAgent` in CAMEL to create powerful conversational agents
-that can perform a variety of tasks, integrate tools, and manage conversations
-effectively. The examples given demonstrate basic usage, tool integration,
-structured output formats, and model specification, allowing you to customize
+You can leverage `ChatAgent` in CAMEL to create powerful conversational agents
+that can perform a variety of tasks, integrate tools, and manage conversations
+effectively. The examples given demonstrate basic usage, tool integration,
+structured output formats, and model specification, allowing you to customize
the behavior of your chat agent to suit your needs.
-If you need more specific features or have other questions about the CAMEL
+If you need more specific features or have other questions about the CAMEL
framework, feel free to ask!
"""
diff --git a/examples/benchmarks/apibank.py b/examples/benchmarks/apibank.py
index 94d86d474a..7478d28b4e 100644
--- a/examples/benchmarks/apibank.py
+++ b/examples/benchmarks/apibank.py
@@ -53,11 +53,11 @@
'''
===============================================================================
API description for ToolSearcher:
- {"name": "ToolSearcher", "description": "Searches for relevant tools in
- library based on the keywords.", "input_parameters": {"keywords": {"type":
- "str", "description": "The keyword to search for."}},
- "output_parameters":
- {"best_matchs": {"type": "Union[List[dict], dict]",
+ {"name": "ToolSearcher", "description": "Searches for relevant tools in
+ library based on the keywords.", "input_parameters": {"keywords": {"type":
+ "str", "description": "The keyword to search for."}},
+ "output_parameters":
+ {"best_matchs": {"type": "Union[List[dict], dict]",
"description": "The best match tool(s)."}}}
===============================================================================
'''
diff --git a/examples/benchmarks/apibench.py b/examples/benchmarks/apibench.py
index 1839770cd5..012b5d93d0 100644
--- a/examples/benchmarks/apibench.py
+++ b/examples/benchmarks/apibench.py
@@ -44,25 +44,25 @@
===============================================================================
Example question including API documentation:
What is an API that can be used to classify sports activities in videos?\n
- Use this API documentation for reference:
- {"domain": "Video Classification", "framework": "PyTorch",
- "functionality": "3D ResNet", "api_name": "slow_r50",
- "api_call": "torch.hub.load(repo_or_dir='facebookresearch/pytorchvideo',
- model='slow_r50', pretrained=True)", "api_arguments": {"pretrained": "True"},
- "python_environment_requirements": ["torch", "json", "urllib",
- "pytorchvideo",
- "torchvision", "torchaudio", "torchtext", "torcharrow", "TorchData",
- "TorchRec", "TorchServe", "PyTorch on XLA Devices"],
- "example_code": ["import torch",
- "model = torch.hub.load('facebookresearch/pytorchvideo',
- 'slow_r50', pretrained=True)",
- "device = 'cpu'", "model = model.eval()", "model = model.to(device)"],
- "performance": {"dataset": "Kinetics 400",
- "accuracy": {"top_1": 74.58, "top_5": 91.63},
- "Flops (G)": 54.52, "Params (M)": 32.45},
- "description": "The 3D ResNet model is a Resnet-style video classification
- network pretrained on the Kinetics 400 dataset. It is based on the
- architecture from the paper 'SlowFast Networks for Video Recognition'
+ Use this API documentation for reference:
+ {"domain": "Video Classification", "framework": "PyTorch",
+ "functionality": "3D ResNet", "api_name": "slow_r50",
+ "api_call": "torch.hub.load(repo_or_dir='facebookresearch/pytorchvideo',
+ model='slow_r50', pretrained=True)", "api_arguments": {"pretrained": "True"},
+ "python_environment_requirements": ["torch", "json", "urllib",
+ "pytorchvideo",
+ "torchvision", "torchaudio", "torchtext", "torcharrow", "TorchData",
+ "TorchRec", "TorchServe", "PyTorch on XLA Devices"],
+ "example_code": ["import torch",
+ "model = torch.hub.load('facebookresearch/pytorchvideo',
+ 'slow_r50', pretrained=True)",
+ "device = 'cpu'", "model = model.eval()", "model = model.to(device)"],
+ "performance": {"dataset": "Kinetics 400",
+ "accuracy": {"top_1": 74.58, "top_5": 91.63},
+ "Flops (G)": 54.52, "Params (M)": 32.45},
+ "description": "The 3D ResNet model is a Resnet-style video classification
+ network pretrained on the Kinetics 400 dataset. It is based on the
+ architecture from the paper 'SlowFast Networks for Video Recognition'
by Christoph Feichtenhofer et al."}}
===============================================================================
'''
diff --git a/examples/benchmarks/browsecomp_workforce.py b/examples/benchmarks/browsecomp_workforce.py
index 7b487c25af..07de756bed 100644
--- a/examples/benchmarks/browsecomp_workforce.py
+++ b/examples/benchmarks/browsecomp_workforce.py
@@ -40,8 +40,8 @@
# Create specialized agents for the workforce
web_researcher_sys_msg = BaseMessage.make_assistant_message(
role_name="Web Researcher",
- content="""You are an expert at researching information on the web.
-You can search for and analyze web content to extract accurate information.
+ content="""You are an expert at researching information on the web.
+You can search for and analyze web content to extract accurate information.
You excel at understanding complex queries and finding precise answers.""",
)
diff --git a/examples/benchmarks/nexus.py b/examples/benchmarks/nexus.py
index 299d5385d0..765fc4f260 100644
--- a/examples/benchmarks/nexus.py
+++ b/examples/benchmarks/nexus.py
@@ -43,23 +43,23 @@
def getIndicatorForIPv6(apiKey: str, ip: str, section: str):
"""
- Retrieves comprehensive information for a specific IPv6 address from the
- AlienVault database.
- This function allows you to obtain various types of data.
- The 'general' section provides general information about the IP,
- including geo data, and a list of other available sections.
- 'reputation' offers OTX data on malicious activity observed by
- AlienVault Labs. 'geo' details more verbose geographic data such
- as country code and coordinates. 'malware' reveals malware samples
- connected to the IP,
- and 'urlList' shows URLs associated with the IP. Lastly, 'passiveDns'
- includes passive DNS information about hostnames/domains
+ Retrieves comprehensive information for a specific IPv6 address from the
+ AlienVault database.
+ This function allows you to obtain various types of data.
+ The 'general' section provides general information about the IP,
+ including geo data, and a list of other available sections.
+ 'reputation' offers OTX data on malicious activity observed by
+ AlienVault Labs. 'geo' details more verbose geographic data such
+ as country code and coordinates. 'malware' reveals malware samples
+ connected to the IP,
+ and 'urlList' shows URLs associated with the IP. Lastly, 'passiveDns'
+ includes passive DNS information about hostnames/domains
pointing to this IP.
Args:
- apiKey: string, required, Your AlienVault API key
- ip: string, required, IPv6 address to query
- - section: string, required, Specific data section to retrieve
+ - section: string, required, Specific data section to retrieve
(options: general, reputation, geo, malware, urlList, passiveDns)
"""
@@ -67,21 +67,21 @@ def getIndicatorForIPv6(apiKey: str, ip: str, section: str):
def getIndicatorForDomain(apiKey: str, domain: str, section: str):
"""
- Retrieves a comprehensive overview for a given domain name from the
- AlienVault database. This function provides various data types
- about the domain. The 'general' section includes general information
- about the domain, such as geo data, and lists of other available
- sections. 'geo' provides detailed geographic data including country
- code and coordinates. The 'malware' section indicates malware samples
+ Retrieves a comprehensive overview for a given domain name from the
+ AlienVault database. This function provides various data types
+ about the domain. The 'general' section includes general information
+ about the domain, such as geo data, and lists of other available
+ sections. 'geo' provides detailed geographic data including country
+ code and coordinates. The 'malware' section indicates malware samples
associated with the domain. 'urlList' shows URLs linked to the domain,
'passiveDns' details passive DNS information about hostnames/domains
- associated with the domain,
+ associated with the domain,
and 'whois' gives Whois records for the domain.
Args:
- apiKey: string, required, Your AlienVault API key
- domain: string, required, Domain address to query
- - section: string, required, Specific data section to retrieve
+ - section: string, required, Specific data section to retrieve
(options: general, geo, malware, urlList, passiveDns, whois)
"""
@@ -89,20 +89,20 @@ def getIndicatorForDomain(apiKey: str, domain: str, section: str):
def getIndicatorForHostname(apiKey: str, hostname: str, section: str):
"""
- Retrieves detailed information for a specific hostname from the
- AlienVault database. This function provides various data types about
- the hostname. The 'general' section includes general information
- about the IP, geo data, and lists of other available sections.
- 'geo' provides detailed geographic data including country code
- and coordinates. The 'malware' section indicates malware samples
- associated with the hostname. 'urlList' shows URLs linked to
- the hostname, and 'passiveDns' details passive DNS information
+ Retrieves detailed information for a specific hostname from the
+ AlienVault database. This function provides various data types about
+ the hostname. The 'general' section includes general information
+ about the IP, geo data, and lists of other available sections.
+ 'geo' provides detailed geographic data including country code
+ and coordinates. The 'malware' section indicates malware samples
+ associated with the hostname. 'urlList' shows URLs linked to
+ the hostname, and 'passiveDns' details passive DNS information
about hostnames/domains associated with the hostname.
Args:
- apiKey: string, required, Your AlienVault API key
- hostname: string, required, Single hostname address to query
- - section: string, required, Specific data section to retrieve
+ - section: string, required, Specific data section to retrieve
(options: general, geo, malware, urlList, passiveDns)
"""
@@ -110,18 +110,18 @@ def getIndicatorForHostname(apiKey: str, hostname: str, section: str):
def getIndicatorForFileHashes(apiKey: str, fileHash: str, section: str):
"""
- Retrieves information related to a specific file hash from the
- AlienVault database.
- This function provides two types of data: 'general',
- which includes general metadata about the file hash and a list of other
- available sections for the hash; and 'analysis', which encompasses both
- dynamic and static analysis of the file,
+ Retrieves information related to a specific file hash from the
+ AlienVault database.
+ This function provides two types of data: 'general',
+ which includes general metadata about the file hash and a list of other
+ available sections for the hash; and 'analysis', which encompasses both
+ dynamic and static analysis of the file,
including Cuckoo analysis, exiftool, etc.
Args:
- apiKey: string, required, Your AlienVault API key
- fileHash: string, required, Single file hash to query
- - section: string, required, Specific data section to retrieve
+ - section: string, required, Specific data section to retrieve
(options: general, analysis)
"""
@@ -129,18 +129,18 @@ def getIndicatorForFileHashes(apiKey: str, fileHash: str, section: str):
def getIndicatorForUrl(apiKey: str, url: str, section: str):
"""
- Retrieves information related to a specific URL from the AlienVault
- database. This function offers two types of data: 'general',
- which includes historical geographic information,
- any pulses this indicator is on,
- and a list of other available sections for this URL; and 'url_list',
- which provides full results from AlienVault Labs URL analysis,
+ Retrieves information related to a specific URL from the AlienVault
+ database. This function offers two types of data: 'general',
+ which includes historical geographic information,
+ any pulses this indicator is on,
+ and a list of other available sections for this URL; and 'url_list',
+ which provides full results from AlienVault Labs URL analysis,
potentially including multiple entries.
Args:
- apiKey: string, required, Your AlienVault API key
- url: string, required, Single URL to query
- - section: string, required, Specific data section to retrieve
+ - section: string, required, Specific data section to retrieve
(options: general, url_list)
"""
@@ -148,20 +148,20 @@ def getIndicatorForUrl(apiKey: str, url: str, section: str):
def getIndicatorForCVE(apiKey: str, cve: str, section: str):
"""
- Retrieves information related to a specific CVE
+ Retrieves information related to a specific CVE
(Common Vulnerability Enumeration)
from the AlienVault database. This function offers detailed data on CVEs.
- The 'General' section includes MITRE CVE data, such as CPEs
- (Common Platform Enumerations),
- CWEs (Common Weakness Enumerations), and other relevant details.
- It also provides information on any pulses this indicator is on,
+ The 'General' section includes MITRE CVE data, such as CPEs
+ (Common Platform Enumerations),
+ CWEs (Common Weakness Enumerations), and other relevant details.
+ It also provides information on any pulses this indicator is on,
and lists other sections currently available for this CVE.
Args:
- apiKey: string, required, Your AlienVault API key
- - cve: string, required, Specific CVE identifier to query
+ - cve: string, required, Specific CVE identifier to query
(e.g., 'CVE-2014-0160')
- - section: string, required, Specific data section to retrieve
+ - section: string, required, Specific data section to retrieve
('general' only)
"""
@@ -169,16 +169,16 @@ def getIndicatorForCVE(apiKey: str, cve: str, section: str):
def getIndicatorForNIDS(apiKey: str, nids: str, section: str):
"""
- Retrieves metadata information for a specific
- Network Intrusion Detection System (NIDS)
- indicator from the AlienVault database. This function is designed to
+ Retrieves metadata information for a specific
+ Network Intrusion Detection System (NIDS)
+ indicator from the AlienVault database. This function is designed to
provide general metadata about NIDS indicators.
Args:
- apiKey: string, required, Your AlienVault API key
- - nids: string, required, Specific NIDS indicator to query
+ - nids: string, required, Specific NIDS indicator to query
(e.g., '2820184')
- - section: string, required, Specific data section to retrieve
+ - section: string, required, Specific data section to retrieve
('general' only)
"""
@@ -188,17 +188,17 @@ def getIndicatorForCorrelationRules(apiKey: str, correlationRule: str,
"""
Retrieves metadata information related to a specific Correlation Rule from
- the AlienVault database. This function is designed to provide
- general metadata about
- Correlation Rules used in network security and event correlation.
- Correlation Rules are crucial for identifying patterns and potential
+ the AlienVault database. This function is designed to provide
+ general metadata about
+ Correlation Rules used in network security and event correlation.
+ Correlation Rules are crucial for identifying patterns and potential
security threats in network data.
Args:
- apiKey: string, required, Your AlienVault API key
- - correlationRule: string, required, Specific Correlation Rule
+ - correlationRule: string, required, Specific Correlation Rule
identifier to query (e.g., '572f8c3c540c6f0161677877')
- - section: string, required, Specific data section to retrieve
+ - section: string, required, Specific data section to retrieve
('general' only)
"""
diff --git a/examples/bots/discord_bot.py b/examples/bots/discord_bot.py
index 29822892de..e51c614298 100644
--- a/examples/bots/discord_bot.py
+++ b/examples/bots/discord_bot.py
@@ -55,9 +55,9 @@ def __init__(
"""
assistant_sys_msg = '''
- Objective:
+ Objective:
You are a customer service bot designed to assist users
- with inquiries related to our open-source project.
+ with inquiries related to our open-source project.
Your responses should be informative, concise, and helpful.
Instructions:
@@ -65,13 +65,13 @@ def __init__(
user's question. Focus on keywords and context to
determine the user's intent.
Search for Relevant Information: Use the provided dataset
- and refer to the RAG (file to find answers that
- closely match the user's query. The RAG file
- contains detailed interactions and should be your
+ and refer to the RAG (file to find answers that
+ closely match the user's query. The RAG file
+ contains detailed interactions and should be your
primary resource for crafting responses.
- Provide Clear and Concise Responses: Your answers should
+ Provide Clear and Concise Responses: Your answers should
be clear and to the point. Avoid overly technical
- language unless the user's query indicates
+ language unless the user's query indicates
familiarity with technical terms.
Encourage Engagement: Where applicable, encourage users
to contribute to the project or seek further
@@ -86,12 +86,12 @@ def __init__(
further engagement if appropriate.
bd
Tone:
- Professional: Maintain a professional tone that
+ Professional: Maintain a professional tone that
instills confidence in the user.
- Friendly: Be approachable and friendly to make users
+ Friendly: Be approachable and friendly to make users
feel comfortable.
Helpful: Always aim to be as helpful as possible,
- guiding users to solutions.
+ guiding users to solutions.
'''
self._agent = ChatAgent(
diff --git a/examples/bots/discord_bot_use_msg_queue.py b/examples/bots/discord_bot_use_msg_queue.py
index d1e729fa0c..4e4faf660c 100644
--- a/examples/bots/discord_bot_use_msg_queue.py
+++ b/examples/bots/discord_bot_use_msg_queue.py
@@ -51,9 +51,9 @@ def __init__(
"""
assistant_sys_msg = '''
- Objective:
+ Objective:
You are a customer service bot designed to assist users
- with inquiries related to our open-source project.
+ with inquiries related to our open-source project.
Your responses should be informative, concise, and helpful.
Instructions:
@@ -61,13 +61,13 @@ def __init__(
user's question. Focus on keywords and context to
determine the user's intent.
Search for Relevant Information: Use the provided dataset
- and refer to the RAG (file to find answers that
- closely match the user's query. The RAG file
- contains detailed interactions and should be your
+ and refer to the RAG (file to find answers that
+ closely match the user's query. The RAG file
+ contains detailed interactions and should be your
primary resource for crafting responses.
- Provide Clear and Concise Responses: Your answers should
+ Provide Clear and Concise Responses: Your answers should
be clear and to the point. Avoid overly technical
- language unless the user's query indicates
+ language unless the user's query indicates
familiarity with technical terms.
Encourage Engagement: Where applicable, encourage users
to contribute to the project or seek further
@@ -82,12 +82,12 @@ def __init__(
further engagement if appropriate.
bd
Tone:
- Professional: Maintain a professional tone that
+ Professional: Maintain a professional tone that
instills confidence in the user.
- Friendly: Be approachable and friendly to make users
+ Friendly: Be approachable and friendly to make users
feel comfortable.
Helpful: Always aim to be as helpful as possible,
- guiding users to solutions.
+ guiding users to solutions.
'''
self._agent = ChatAgent(
diff --git a/examples/bots/slack_bot.py b/examples/bots/slack_bot.py
index 6ba9648c55..46ba58baa1 100644
--- a/examples/bots/slack_bot.py
+++ b/examples/bots/slack_bot.py
@@ -57,9 +57,9 @@ def __init__(
"""
content = '''
- Objective:
+ Objective:
You are a customer service bot designed to assist users
- with inquiries related to our open-source project.
+ with inquiries related to our open-source project.
Your responses should be informative, concise, and helpful.
Instructions:
@@ -67,13 +67,13 @@ def __init__(
user's question. Focus on keywords and context to
determine the user's intent.
Search for Relevant Information: Use the provided dataset
- and refer to the RAG (file to find answers that
- closely match the user's query. The RAG file
- contains detailed interactions and should be your
+ and refer to the RAG (file to find answers that
+ closely match the user's query. The RAG file
+ contains detailed interactions and should be your
primary resource for crafting responses.
- Provide Clear and Concise Responses: Your answers should
+ Provide Clear and Concise Responses: Your answers should
be clear and to the point. Avoid overly technical
- language unless the user's query indicates
+ language unless the user's query indicates
familiarity with technical terms.
Encourage Engagement: Where applicable, encourage users
to contribute to the project or seek further
@@ -88,12 +88,12 @@ def __init__(
further engagement if appropriate.
bd
Tone:
- Professional: Maintain a professional tone that
+ Professional: Maintain a professional tone that
instills confidence in the user.
- Friendly: Be approachable and friendly to make users
+ Friendly: Be approachable and friendly to make users
feel comfortable.
Helpful: Always aim to be as helpful as possible,
- guiding users to solutions.
+ guiding users to solutions.
'''
self._agent = ChatAgent(
diff --git a/examples/bots/slack_bot_use_msg_queue.py b/examples/bots/slack_bot_use_msg_queue.py
index d2c08184b9..f4fe64e881 100644
--- a/examples/bots/slack_bot_use_msg_queue.py
+++ b/examples/bots/slack_bot_use_msg_queue.py
@@ -60,9 +60,9 @@ def __init__(
"""
assistant_sys_msg = '''
- Objective:
+ Objective:
You are a customer service bot designed to assist users
- with inquiries related to our open-source project.
+ with inquiries related to our open-source project.
Your responses should be informative, concise, and helpful.
Instructions:
@@ -70,13 +70,13 @@ def __init__(
user's question. Focus on keywords and context to
determine the user's intent.
Search for Relevant Information: Use the provided dataset
- and refer to the RAG (file to find answers that
- closely match the user's query. The RAG file
- contains detailed interactions and should be your
+ and refer to the RAG (file to find answers that
+ closely match the user's query. The RAG file
+ contains detailed interactions and should be your
primary resource for crafting responses.
- Provide Clear and Concise Responses: Your answers should
+ Provide Clear and Concise Responses: Your answers should
be clear and to the point. Avoid overly technical
- language unless the user's query indicates
+ language unless the user's query indicates
familiarity with technical terms.
Encourage Engagement: Where applicable, encourage users
to contribute to the project or seek further
@@ -91,12 +91,12 @@ def __init__(
further engagement if appropriate.
bd
Tone:
- Professional: Maintain a professional tone that
+ Professional: Maintain a professional tone that
instills confidence in the user.
- Friendly: Be approachable and friendly to make users
+ Friendly: Be approachable and friendly to make users
feel comfortable.
Helpful: Always aim to be as helpful as possible,
- guiding users to solutions.
+ guiding users to solutions.
'''
self._agent = ChatAgent(
@@ -270,10 +270,10 @@ def start_async_queue_processor(agent: BotAgent, msg_queue: queue.Queue):
if __name__ == "__main__":
r"""Main entry point for running the Slack bot application.
- This section initializes the required components including the message
- queue, agent, and the SlackBot instance. It also starts a separate thread
- for asynchronous message processing to avoid blocking the Slack bot's main
- event loop. The `slack_bot.run()` function will handle incoming Slack
+ This section initializes the required components including the message
+ queue, agent, and the SlackBot instance. It also starts a separate thread
+ for asynchronous message processing to avoid blocking the Slack bot's main
+ event loop. The `slack_bot.run()` function will handle incoming Slack
events on the main thread, while the separate thread will handle processing
the messages from the queue.
"""
diff --git a/examples/conversion.py b/examples/conversion.py
index 5922cf2707..4beea32e83 100644
--- a/examples/conversion.py
+++ b/examples/conversion.py
@@ -59,8 +59,8 @@ def camel_messages_to_sharegpt(
ShareGPTMessage(
from_="tool",
value='''
-{"name": "get_stock_fundamentals", "content":
-{"symbol": "TSLA", "company_name": "Tesla, Inc.",
+{"name": "get_stock_fundamentals", "content":
+{"symbol": "TSLA", "company_name": "Tesla, Inc.",
"sector": "Consumer Cyclical", "pe_ratio": 49.604652}}
''',
),
diff --git a/examples/data_collectors/alpaca_collector.py b/examples/data_collectors/alpaca_collector.py
index abf5eaf71b..e76ec62081 100644
--- a/examples/data_collectors/alpaca_collector.py
+++ b/examples/data_collectors/alpaca_collector.py
@@ -52,5 +52,5 @@
{'instruction': 'You are a helpful assistantWhen is the release date of the video game Portal?', 'input': '', 'output': 'The video game "Portal" was released on October 10, 2007. It was developed by Valve Corporation and is part of the game bundle known as "The Orange Box," which also included "Half-Life 2" and its episodes.'}
2025-01-19 19:26:09,140 - httpx - INFO - HTTP Request: POST https://api.openai.com/v1/chat/completions "HTTP/1.1 200 OK"
{'instruction': 'You are a helpful assistant When is the release date of the video game Portal?', 'input': '', 'output': 'The video game "Portal" was released on October 10, 2007. It was developed by Valve Corporation and is part of the game bundle known as "The Orange Box," which also included "Half-Life 2" and its episodes.'}
-{'instruction': 'You are a helpful assistantWhen is the release date of the video game Portal?', 'input': '', 'output': 'The video game "Portal" was released on October 10, 2007. It was developed by
+{'instruction': 'You are a helpful assistantWhen is the release date of the video game Portal?', 'input': '', 'output': 'The video game "Portal" was released on October 10, 2007. It was developed by
"""
diff --git a/examples/datagen/evol_instruct/input.json b/examples/datagen/evol_instruct/input.json
index f5a5e3bdce..58ff3b7473 100644
--- a/examples/datagen/evol_instruct/input.json
+++ b/examples/datagen/evol_instruct/input.json
@@ -4,4 +4,4 @@
"Find the center of the circle with equation $x^2 - 6x + y^2 + 2y = 9$.",
"What are all values of $p$ such that for every $q>0$, we have $$\\frac{3(pq^2+p^2q+3q^2+3pq)}{p+q}>2p^2q?$$ Express your answer in interval notation in decimal form.",
"Tim wants to invest some money in a bank which compounds quarterly with an annual interest rate of $7\\%$. To the nearest dollar, how much money should he invest if he wants a total of $\\$60,\\!000$ at the end of $5$ years?"
-]
\ No newline at end of file
+]
diff --git a/examples/datagen/evol_instruct/results_iter.json b/examples/datagen/evol_instruct/results_iter.json
index 99758ce862..70a87df945 100644
--- a/examples/datagen/evol_instruct/results_iter.json
+++ b/examples/datagen/evol_instruct/results_iter.json
@@ -4,4 +4,4 @@
"Determine the equation of the circle that passes through the point \\((1, 2)\\), is tangent to the line \\(3x - 4y + 5 = 0\\), and whose center lies on the line \\(x + y = 7\\). Additionally, find the coordinates of the center and the radius of this circle.",
"Determine all values of \\( p \\) such that for every \\( q > 0 \\), the inequality\n\n\\[\n\\frac{4(pq^3 + p^2q^2 + 4q^3 + 4pq^2) + 2p^2q}{p^2 + q^2} > 3p^3q + \\frac{q^4}{p}\n\\]\n\nholds true. Express your answer in interval notation in decimal form.",
"Alex plans to invest in a retirement account that compounds interest semi-annually with an annual nominal interest rate of \\(6\\%\\). Additionally, Alex will make semi-annual deposits that increase by \\(3\\%\\) each period, starting with an initial deposit of \\(\\$600\\) at the end of the first period. If Alex wants to have a total of \\(\\$75,\\!000\\) at the end of \\(8\\) years, to the nearest dollar, how much should the initial investment be?"
-]
\ No newline at end of file
+]
diff --git a/examples/datagen/evol_instruct/results_iter_harder.json b/examples/datagen/evol_instruct/results_iter_harder.json
index 53f91c12ef..938a632b48 100644
--- a/examples/datagen/evol_instruct/results_iter_harder.json
+++ b/examples/datagen/evol_instruct/results_iter_harder.json
@@ -4,4 +4,4 @@
"Given the equation of the hyperbola \\(9x^2 - 16y^2 - 54x + 64y - 1 = 0\\), find the equation of the asymptotes of the hyperbola. Then, determine the equation of an ellipse that shares the same center and axes orientation as this hyperbola, and whose eccentricity is half that of the hyperbola. Provide the equation of the ellipse in standard form.",
"Determine all values of \\( p \\) such that for every \\( q > 0 \\), the following equation holds:\n\n\\[\n\\int_0^{\\infty} \\left( \\frac{4(pq^3 + p^2q^2 + 4q^3 + 4pq^2) + \\ln(p^2+q^2)}{p^2+q^2} \\right) e^{-q^2} \\, dq = \\int_0^{\\infty} \\left( 3p^3q^2 + e^{p^2q} \\right) e^{-q^2} \\, dq\n\\]\n\nExpress your answer as a set of values in interval notation in decimal form.",
"A company plans to invest in three projects, each with different compounding interest schemes. Project A compounds interest quarterly at an annual rate of \\(7.2\\%\\), Project B compounds interest weekly at an annual rate of \\(6.5\\%\\), and Project C compounds interest continuously at an annual rate of \\(6.0\\%\\). The company wants the total amount from all three investments to be \\( \\$300,000 \\) after 5 years, with the amount from each project being equal at that time. What is the initial investment required for each project?"
-]
\ No newline at end of file
+]
diff --git a/examples/datagen/self_improving_cot/gsm8k_dataset.json b/examples/datagen/self_improving_cot/gsm8k_dataset.json
index be68dad056..de5a5179e3 100644
--- a/examples/datagen/self_improving_cot/gsm8k_dataset.json
+++ b/examples/datagen/self_improving_cot/gsm8k_dataset.json
@@ -89,4 +89,4 @@
"boxed_answer_success": true,
"improvement_history": []
}
-]
\ No newline at end of file
+]
diff --git a/examples/datagen/self_improving_cot/self_improving_cot_example.py b/examples/datagen/self_improving_cot/self_improving_cot_example.py
index f5dea4ea60..0014c444c5 100644
--- a/examples/datagen/self_improving_cot/self_improving_cot_example.py
+++ b/examples/datagen/self_improving_cot/self_improving_cot_example.py
@@ -49,9 +49,9 @@ def main():
problems = json.load(f)
# Initialize agent
- reason_agent_system_message = """Please reason step by step, and put your
+ reason_agent_system_message = """Please reason step by step, and put your
final answer within \\boxed{}."""
- evaluate_agent_system_message = """You are a highly critical teacher who
+ evaluate_agent_system_message = """You are a highly critical teacher who
evaluates the student's answers with a meticulous and demanding approach.
"""
reason_agent = ChatAgent(reason_agent_system_message, model=model)
diff --git a/examples/datagen/self_improving_cot/self_improving_cot_example_with_r1.py b/examples/datagen/self_improving_cot/self_improving_cot_example_with_r1.py
index 472602997a..6aef5a3024 100644
--- a/examples/datagen/self_improving_cot/self_improving_cot_example_with_r1.py
+++ b/examples/datagen/self_improving_cot/self_improving_cot_example_with_r1.py
@@ -74,9 +74,9 @@ def main():
problems = json.load(f)
# Initialize agent
- reason_agent_system_message = """Answer my question and give your
+ reason_agent_system_message = """Answer my question and give your
final answer within \\boxed{}."""
- evaluate_agent_system_message = """You are a highly critical teacher who
+ evaluate_agent_system_message = """You are a highly critical teacher who
evaluates the student's answers with a meticulous and demanding approach.
"""
reason_agent = ChatAgent(
diff --git a/examples/datagen/self_instruct/data_output.json b/examples/datagen/self_instruct/data_output.json
index 3833490afd..fc16e9abd1 100644
--- a/examples/datagen/self_instruct/data_output.json
+++ b/examples/datagen/self_instruct/data_output.json
@@ -74,4 +74,4 @@
}
]
}
-]
\ No newline at end of file
+]
diff --git a/examples/datagen/source2synth.py b/examples/datagen/source2synth.py
index 65ee5ff533..c2b1e7c935 100644
--- a/examples/datagen/source2synth.py
+++ b/examples/datagen/source2synth.py
@@ -55,32 +55,32 @@ def main():
test_texts = [
# Chain of technological developments
"""
- The invention of transistors revolutionized electronics in the 1950s.
- These tiny semiconductor devices enabled the development of smaller and more
- efficient computers. The miniaturization of computers led to the creation of
- personal computers in the 1980s, which transformed how people work and communicate.
- This digital revolution eventually gave rise to the internet, connecting billions
- of people worldwide. Today, this interconnected network powers artificial
+ The invention of transistors revolutionized electronics in the 1950s.
+ These tiny semiconductor devices enabled the development of smaller and more
+ efficient computers. The miniaturization of computers led to the creation of
+ personal computers in the 1980s, which transformed how people work and communicate.
+ This digital revolution eventually gave rise to the internet, connecting billions
+ of people worldwide. Today, this interconnected network powers artificial
intelligence systems that are reshaping various industries.
""", # noqa: E501
# Environmental changes causation chain
"""
- Industrial activities have significantly increased carbon dioxide emissions since
- the Industrial Revolution. These elevated CO2 levels have enhanced the greenhouse
- effect, trapping more heat in Earth's atmosphere. The rising global temperatures
- have accelerated the melting of polar ice caps, which has led to rising sea levels.
- Coastal communities are now facing increased flooding risks, forcing many to
- consider relocation. This migration pattern is creating new challenges for urban
+ Industrial activities have significantly increased carbon dioxide emissions since
+ the Industrial Revolution. These elevated CO2 levels have enhanced the greenhouse
+ effect, trapping more heat in Earth's atmosphere. The rising global temperatures
+ have accelerated the melting of polar ice caps, which has led to rising sea levels.
+ Coastal communities are now facing increased flooding risks, forcing many to
+ consider relocation. This migration pattern is creating new challenges for urban
planning and resource management.
""", # noqa: E501
# Biological evolution chain
"""
- The discovery of antibiotics began with Alexander Fleming's observation of
- penicillin in 1928. The widespread use of these medications has saved countless
- lives from bacterial infections. However, the extensive use of antibiotics has
- led to the evolution of resistant bacteria strains. These superbugs now pose
- a significant challenge to modern medicine, requiring the development of new
- treatment approaches. Scientists are exploring alternative solutions like
+ The discovery of antibiotics began with Alexander Fleming's observation of
+ penicillin in 1928. The widespread use of these medications has saved countless
+ lives from bacterial infections. However, the extensive use of antibiotics has
+ led to the evolution of resistant bacteria strains. These superbugs now pose
+ a significant challenge to modern medicine, requiring the development of new
+ treatment approaches. Scientists are exploring alternative solutions like
bacteriophage therapy to combat antibiotic resistance.
""", # noqa: E501
]
@@ -191,75 +191,75 @@ def main():
Q&A Pair 1:
Type: multi_hop_qa
-Question: How did the invention of transistors impact the development of
+Question: How did the invention of transistors impact the development of
personal computers?
Reasoning Steps:
1. {'step': 'Identify the role of transistors in electronics.'}
-2. {'step': 'Understand how transistors enabled the miniaturization of
+2. {'step': 'Understand how transistors enabled the miniaturization of
computers.'}
-3. {'step': 'Connect the miniaturization of computers to the creation of
+3. {'step': 'Connect the miniaturization of computers to the creation of
personal computers in the 1980s.'}
-4. {'step': 'Determine the overall impact of personal computers on work and
+4. {'step': 'Determine the overall impact of personal computers on work and
communication.'}
-Answer: The invention of transistors allowed for smaller and more efficient
-computers, which led to the development of personal computers in the 1980s,
+Answer: The invention of transistors allowed for smaller and more efficient
+computers, which led to the development of personal computers in the 1980s,
transforming work and communication.
Supporting Facts:
1. Transistors are semiconductor devices that revolutionized electronics.
2. The miniaturization of computers was made possible by transistors.
-3. Personal computers emerged in the 1980s as a result of smaller computer
+3. Personal computers emerged in the 1980s as a result of smaller computer
designs.
4. Personal computers changed how people work and communicate.
Q&A Pair 2:
Type: multi_hop_qa
-Question: What was the sequence of developments that led from transistors to
+Question: What was the sequence of developments that led from transistors to
the internet?
Reasoning Steps:
-1. {'step': 'Identify how transistors contributed to the development of
+1. {'step': 'Identify how transistors contributed to the development of
smaller and more efficient computers.'}
-2. {'step': 'Explain how the miniaturization of computers resulted in the
+2. {'step': 'Explain how the miniaturization of computers resulted in the
creation of personal computers in the 1980s.'}
3. {'step': 'Discuss how personal computers transformed work and communication.
'}
-4. {'step': 'Connect the transformation in communication to the rise of the
+4. {'step': 'Connect the transformation in communication to the rise of the
internet.'}
-Answer: Transistors enabled smaller computers, which led to personal computers
-in the 1980s, transforming communication and eventually giving rise to the
+Answer: Transistors enabled smaller computers, which led to personal computers
+in the 1980s, transforming communication and eventually giving rise to the
internet.
Supporting Facts:
-1. Transistors are tiny semiconductor devices that made computers smaller and
+1. Transistors are tiny semiconductor devices that made computers smaller and
more efficient.
-2. The miniaturization of computers allowed for the creation of personal
+2. The miniaturization of computers allowed for the creation of personal
computers in the 1980s.
3. Personal computers transformed how people work and communicate.
-4. The digital revolution and personal computers contributed to the rise of
+4. The digital revolution and personal computers contributed to the rise of
the internet, connecting billions worldwide.
Q&A Pair 3:
Type: multi_hop_qa
-Question: How did the miniaturization of computers contribute to the
+Question: How did the miniaturization of computers contribute to the
development of artificial intelligence systems today?
Reasoning Steps:
-1. {'step': 'Identify the impact of miniaturization on the creation of
+1. {'step': 'Identify the impact of miniaturization on the creation of
personal computers in the 1980s.'}
2. {'step': 'Explain how personal computers transformed communication and work.
'}
-3. {'step': 'Connect the digital revolution and the rise of the internet to
+3. {'step': 'Connect the digital revolution and the rise of the internet to
the development of artificial intelligence.'}
-4. {'step': 'Discuss how the interconnected network of the internet supports
+4. {'step': 'Discuss how the interconnected network of the internet supports
AI systems in various industries.'}
-Answer: The miniaturization of computers led to personal computers, which
-transformed communication and work, and this digital revolution, along with
-the internet, supports the development of artificial intelligence systems
+Answer: The miniaturization of computers led to personal computers, which
+transformed communication and work, and this digital revolution, along with
+the internet, supports the development of artificial intelligence systems
today.
Supporting Facts:
-1. Miniaturization of computers enabled the creation of personal computers in
+1. Miniaturization of computers enabled the creation of personal computers in
the 1980s.
2. Personal computers transformed how people work and communicate.
-3. The digital revolution led to the rise of the internet, connecting billions
+3. The digital revolution led to the rise of the internet, connecting billions
of people.
-4. The internet powers artificial intelligence systems that are reshaping
+4. The internet powers artificial intelligence systems that are reshaping
various industries.
=== Batch Processing Statistics ===
diff --git a/examples/debug/eigent.py b/examples/debug/eigent.py
index 3c4381de2c..9fdd92e5ef 100644
--- a/examples/debug/eigent.py
+++ b/examples/debug/eigent.py
@@ -98,76 +98,76 @@ def developer_agent_factory(
*TerminalToolkit(clone_current_env=True).get_tools(),
]
- system_message = f"""You are a skilled coding assistant. You can write and
- execute code by using the available terminal tools. You MUST use the
- `send_message_to_user` tool to inform the user of every decision and
- action you take. Your message must include a short title and a
+ system_message = f"""You are a skilled coding assistant. You can write and
+ execute code by using the available terminal tools. You MUST use the
+ `send_message_to_user` tool to inform the user of every decision and
+ action you take. Your message must include a short title and a
one-sentence description. This is a mandatory part of your workflow.
You are now working in `{WORKING_DIRECTORY}`. All your work
related to local operations should be done in that directory.
Your capabilities include:
- - Writing code to solve tasks. To execute the code, you MUST first save
- it to a file in the workspace (e.g., `script.py`), and then run it using
+ - Writing code to solve tasks. To execute the code, you MUST first save
+ it to a file in the workspace (e.g., `script.py`), and then run it using
the terminal tool (e.g., `python script.py`).
- - Running terminal commands to install packages (e.g., with `pip` or
- `uv`), process files, or test functionality. All files you create should
+ - Running terminal commands to install packages (e.g., with `pip` or
+ `uv`), process files, or test functionality. All files you create should
be in the designated workspace.
- - You can use `uv` or `pip` to install packages, for example, `uv pip
+ - You can use `uv` or `pip` to install packages, for example, `uv pip
install requests` or `pip install requests`.
- - Verifying your solutions through immediate execution and testing in the
+ - Verifying your solutions through immediate execution and testing in the
terminal.
- - Utilizing any Python libraries (e.g., requests, BeautifulSoup, pandas,
- etc.) needed for efficient solutions. You can install missing packages
+ - Utilizing any Python libraries (e.g., requests, BeautifulSoup, pandas,
+ etc.) needed for efficient solutions. You can install missing packages
using `pip` or `uv` in the terminal.
- - Implementing complete, production-ready code rather than theoretical
+ - Implementing complete, production-ready code rather than theoretical
examples.
- - Demonstrating results with proper error handling and practical
+ - Demonstrating results with proper error handling and practical
implementation.
- - Asking for human input via the console if you are stuck or need
+ - Asking for human input via the console if you are stuck or need
clarification.
- - Communicating with other agents using messaging tools. You can use
- `list_available_agents` to see available team members and `send_message`
+ - Communicating with other agents using messaging tools. You can use
+ `list_available_agents` to see available team members and `send_message`
to coordinate with them for complex tasks requiring collaboration.
### Terminal Tool Workflow:
- The terminal tools are session-based. You must manage one or more terminal
+ The terminal tools are session-based. You must manage one or more terminal
sessions to perform your tasks. A session is identified by a unique `id`.
- 1. **Execute Commands**: Use `shell_exec(id="...", command="...")` to run
+ 1. **Execute Commands**: Use `shell_exec(id="...", command="...")` to run
a command. If the `id` is new, a new session is created.
Example: `shell_exec(id="session_1", command="ls -l")`
- 2. **Manage Long-Running Tasks**: For commands that take time, run them
- in one step, and then use `shell_wait(id="...")` to wait for
- completion. This prevents blocking and allows you to perform other
+ 2. **Manage Long-Running Tasks**: For commands that take time, run them
+ in one step, and then use `shell_wait(id="...")` to wait for
+ completion. This prevents blocking and allows you to perform other
tasks in parallel.
- 3. **View Output**: Use `shell_view(id="...")` to see the full command
+ 3. **View Output**: Use `shell_view(id="...")` to see the full command
history and output of a session.
- 4. **Run Tasks in Parallel**: Use different session IDs to run multiple
+ 4. **Run Tasks in Parallel**: Use different session IDs to run multiple
commands concurrently.
- `shell_exec(id="install", command="pip install numpy")`
- `shell_exec(id="test", command="python my_script.py")`
5. **Interact with Processes**: For commands that require input:
- - Initialize TerminalToolkit with `interactive=True` for real-time
+ - Initialize TerminalToolkit with `interactive=True` for real-time
interactive sessions.
- - Use `shell_write_to_process(id="...", content="...")` to send input
+ - Use `shell_write_to_process(id="...", content="...")` to send input
to a non-interactive running process.
- 6. **Stop a Process**: If a process needs to be terminated, use
+ 6. **Stop a Process**: If a process needs to be terminated, use
`shell_kill_process(id="...")`.
### Collaboration and Assistance:
- - If you get stuck, encounter an issue you cannot solve (like a CAPTCHA),
+ - If you get stuck, encounter an issue you cannot solve (like a CAPTCHA),
or need clarification, use the `ask_human_via_console` tool.
- - For complex tasks, you can collaborate with other agents. Use
- `list_available_agents` to see your team members and `send_message` to
+ - For complex tasks, you can collaborate with other agents. Use
+ `list_available_agents` to see your team members and `send_message` to
communicate with them.
- Remember to manage your terminal sessions. You can create new sessions
+ Remember to manage your terminal sessions. You can create new sessions
and run commands in them.
"""
@@ -234,8 +234,8 @@ def search_agent_factory(
SearchToolkit().search_bing,
]
- system_message = f"""You are a helpful assistant that can search the web,
- extract webpage content, simulate browser actions, and provide relevant
+ system_message = f"""You are a helpful assistant that can search the web,
+ extract webpage content, simulate browser actions, and provide relevant
information to solve the given task.
**CRITICAL**: You MUST NOT answer from your own knowledge. All information
@@ -279,7 +279,7 @@ def search_agent_factory(
4. **Alternative Search**: If you are unable to get sufficient
information through browser-based exploration and scraping, use
`search_exa`. This tool is best used for getting quick summaries or
- finding specific answers when visiting web page is could not find the
+    finding specific answers when the visited web pages could not provide the
information.
### Guidelines and Best Practices
@@ -289,8 +289,8 @@ def search_agent_factory(
- **Thoroughness**: If a search query is complex, break it down. If a
snippet is unhelpful but the URL seems authoritative, visit the page.
Check subpages for more information.
- - **Local File Operations**: You can use `shell_exec` to perform
- terminal commands within your working directory, such as listing files
+ - **Local File Operations**: You can use `shell_exec` to perform
+ terminal commands within your working directory, such as listing files
(`ls`) or checking file content (`cat`).
- **Persistence**: If one method fails, try another. Combine search,
scraper, and browser tools for comprehensive information gathering.
@@ -301,7 +301,7 @@ def search_agent_factory(
visited and processed.
### Handling Obstacles
- - When encountering verification challenges (like login, CAPTCHAs or
+ - When encountering verification challenges (like login, CAPTCHAs or
robot checks), you MUST request help using the human toolkit.
"""
@@ -337,10 +337,10 @@ def document_agent_factory(
*TerminalToolkit().get_tools(),
]
- system_message = f"""You are a Document Processing Assistant specialized
- in creating, modifying, and managing various document formats. You MUST
- use the `send_message_to_user` tool to inform the user of every decision
- and action you take. Your message must include a short title and a
+ system_message = f"""You are a Document Processing Assistant specialized
+ in creating, modifying, and managing various document formats. You MUST
+ use the `send_message_to_user` tool to inform the user of every decision
+ and action you take. Your message must include a short title and a
one-sentence description. This is a mandatory part of your workflow.
You are now working in `{WORKING_DIRECTORY}`. All your work
@@ -349,26 +349,26 @@ def document_agent_factory(
Your capabilities include:
1. Information Gathering:
- - Before creating any document, you MUST use the `read_note` tool to
+ - Before creating any document, you MUST use the `read_note` tool to
get all the information gathered by the Search Agent.
- - The notes contain all the raw data, findings, and sources you need
+ - The notes contain all the raw data, findings, and sources you need
to complete your work.
- - You can communicate with other agents using messaging tools when you
- need additional information. Use `list_available_agents` to see
- available team members and `send_message` to request specific data or
+ - You can communicate with other agents using messaging tools when you
+ need additional information. Use `list_available_agents` to see
+ available team members and `send_message` to request specific data or
clarifications.
2. Document Creation & Editing:
- - Create and write to various file formats including Markdown (.md),
+ - Create and write to various file formats including Markdown (.md),
Word documents (.docx), PDFs, CSV files, JSON, YAML, and HTML
- - Apply formatting options including custom encoding, font styles, and
+ - Apply formatting options including custom encoding, font styles, and
layout settings
- Modify existing files with automatic backup functionality
- - Support for mathematical expressions in PDF documents through LaTeX
+ - Support for mathematical expressions in PDF documents through LaTeX
rendering
3. PowerPoint Presentation Creation:
- - Create professional PowerPoint presentations with title slides and
+ - Create professional PowerPoint presentations with title slides and
content slides
- Format text with bold and italic styling
- Create bullet point lists with proper hierarchical structure
@@ -377,7 +377,7 @@ def document_agent_factory(
- Support for custom templates and slide layouts
4. Excel Spreadsheet Management:
- - Extract and analyze content from Excel files (.xlsx, .xls, .csv)
+ - Extract and analyze content from Excel files (.xlsx, .xls, .csv)
with detailed cell information and markdown formatting
- Create new Excel workbooks from scratch with multiple sheets
- Perform comprehensive spreadsheet operations including:
@@ -394,9 +394,9 @@ def document_agent_factory(
- Send informative messages to users without requiring responses
6. Terminal and File System:
- - You have access to a full suite of terminal tools to interact with
+ - You have access to a full suite of terminal tools to interact with
the file system within your working directory (`{WORKING_DIRECTORY}`).
- - You can execute shell commands (`shell_exec`), list files, and manage
+ - You can execute shell commands (`shell_exec`), list files, and manage
your workspace as needed to support your document creation tasks.
When working with documents, you should:
@@ -406,11 +406,11 @@ def document_agent_factory(
- Ask clarifying questions when user requirements are ambiguous
- Recommend best practices for document organization and presentation
- For Excel files, always provide clear data structure and organization
- - When creating spreadsheets, consider data relationships and use
+ - When creating spreadsheets, consider data relationships and use
appropriate sheet naming conventions
- Your goal is to help users efficiently create, modify, and manage their
- documents with professional quality and appropriate formatting across all
+ Your goal is to help users efficiently create, modify, and manage their
+ documents with professional quality and appropriate formatting across all
supported formats including advanced spreadsheet functionality."""
return ChatAgent(
@@ -443,10 +443,10 @@ def multi_modal_agent_factory(model: BaseModelBackend, task_id: str):
*TerminalToolkit().get_tools(),
]
- system_message = f"""You are a Multi-Modal Processing Assistant
- specialized in analyzing and generating various types of media content.
- You MUST use the `send_message_to_user` tool to inform the user of every
- decision and action you take. Your message must include a short title and
+ system_message = f"""You are a Multi-Modal Processing Assistant
+ specialized in analyzing and generating various types of media content.
+ You MUST use the `send_message_to_user` tool to inform the user of every
+ decision and action you take. Your message must include a short title and
a one-sentence description. This is a mandatory part of your workflow.
You are now working in `{WORKING_DIRECTORY}`. All your work
@@ -477,10 +477,10 @@ def multi_modal_agent_factory(model: BaseModelBackend, task_id: str):
- Send informative messages to users without requiring responses
5. Agent Communication:
- - Communicate with other agents using messaging tools when
- collaboration is needed. Use `list_available_agents` to see available
- team members and `send_message` to coordinate with them, especially
- when you need to share analysis results or request additional
+ - Communicate with other agents using messaging tools when
+ collaboration is needed. Use `list_available_agents` to see available
+ team members and `send_message` to coordinate with them, especially
+ when you need to share analysis results or request additional
processing capabilities.
6. File Management:
@@ -495,7 +495,7 @@ def multi_modal_agent_factory(model: BaseModelBackend, task_id: str):
- Explain your analysis process and reasoning
- Ask clarifying questions when user requirements are ambiguous
- Your goal is to help users effectively process, understand, and create
+ Your goal is to help users effectively process, understand, and create
multi-modal content across audio and visual domains."""
return ChatAgent(
@@ -515,10 +515,10 @@ def social_medium_agent_factory(model: BaseModelBackend, task_id: str):
BaseMessage.make_assistant_message(
role_name="Social Medium Agent",
content=f"""
-You are a Social Media Management Assistant with comprehensive capabilities
-across multiple platforms. You MUST use the `send_message_to_user` tool to
-inform the user of every decision and action you take. Your message must
-include a short title and a one-sentence description. This is a mandatory
+You are a Social Media Management Assistant with comprehensive capabilities
+across multiple platforms. You MUST use the `send_message_to_user` tool to
+inform the user of every decision and action you take. Your message must
+include a short title and a one-sentence description. This is a mandatory
part of your workflow.
@@ -528,7 +528,7 @@ def social_medium_agent_factory(model: BaseModelBackend, task_id: str):
Your integrated toolkits enable you to:
1. WhatsApp Business Management (WhatsAppToolkit):
- - Send text and template messages to customers via the WhatsApp Business
+ - Send text and template messages to customers via the WhatsApp Business
API.
- Retrieve business profile information.
@@ -561,9 +561,9 @@ def social_medium_agent_factory(model: BaseModelBackend, task_id: str):
- Ask questions to users and send messages via console.
8. Agent Communication:
- - Communicate with other agents using messaging tools when collaboration
- is needed. Use `list_available_agents` to see available team members and
- `send_message` to coordinate with them, especially when you need content
+ - Communicate with other agents using messaging tools when collaboration
+ is needed. Use `list_available_agents` to see available team members and
+ `send_message` to coordinate with them, especially when you need content
from document agents or research from search agents.
9. File System Access:
@@ -573,7 +573,7 @@ def social_medium_agent_factory(model: BaseModelBackend, task_id: str):
When assisting users, always:
- Identify which platform's functionality is needed for the task.
-- Check if required API credentials are available before attempting
+- Check if required API credentials are available before attempting
operations.
- Provide clear explanations of what actions you're taking.
- Handle rate limits and API restrictions appropriately.
@@ -750,8 +750,8 @@ async def main():
human_task = Task(
content=(
"""
- go to amazon and find a popular product,
- check the comments and reviews,
+ go to amazon and find a popular product,
+ check the comments and reviews,
and then write a report about the product.
"""
),
diff --git a/examples/deductive_reasoner_agent/deduce_conditions_and_quality.py b/examples/deductive_reasoner_agent/deduce_conditions_and_quality.py
index 1452617965..b5a2b983bb 100644
--- a/examples/deductive_reasoner_agent/deduce_conditions_and_quality.py
+++ b/examples/deductive_reasoner_agent/deduce_conditions_and_quality.py
@@ -30,7 +30,7 @@ def main(model=None) -> None:
print(
Fore.GREEN
+ "Conditions and quality from the starting state:\n"
- + f"{json.dumps(conditions_and_quality,
+ + f"{json.dumps(conditions_and_quality,
indent=4, ensure_ascii=False)}",
Fore.RESET,
)
diff --git a/examples/interpreters/ipython_interpreter_example.py b/examples/interpreters/ipython_interpreter_example.py
index 04c4701ce5..30f0e4054b 100644
--- a/examples/interpreters/ipython_interpreter_example.py
+++ b/examples/interpreters/ipython_interpreter_example.py
@@ -21,7 +21,7 @@
code = """
def add(a, b):
return a + b
-
+
def multiply(a, b):
return a * b
@@ -34,7 +34,7 @@ def main():
operation = subtract
result = operation(a, b)
print(result)
-
+
if __name__ == "__main__":
main()
"""
diff --git a/examples/interpreters/microsandbox_interpreter_example.py b/examples/interpreters/microsandbox_interpreter_example.py
index a8fb74ae21..1e903cd386 100644
--- a/examples/interpreters/microsandbox_interpreter_example.py
+++ b/examples/interpreters/microsandbox_interpreter_example.py
@@ -33,7 +33,7 @@ def calculate(a, b):
'product': a * b,
'difference': a - b
}
-
+
result = calculate(10, 5)
for key, value in result.items():
print(f"{key}: {value}")
@@ -60,8 +60,8 @@ def test_javascript_example():
{name: 'Bob', age: 25},
{name: 'Charlie', age: 35}
];
-
- const avgAge = users.reduce((sum, user) => sum + user.age, 0) /
+
+ const avgAge = users.reduce((sum, user) => sum + user.age, 0) /
users.length;
console.log(`Average age: ${avgAge}`);
console.log(`Users: ${users.map(u => u.name).join(', ')}`);
diff --git a/examples/knowledge_graph/knowledge_graph_agent_example.py b/examples/knowledge_graph/knowledge_graph_agent_example.py
index 77e679290a..8e73a6eb3b 100644
--- a/examples/knowledge_graph/knowledge_graph_agent_example.py
+++ b/examples/knowledge_graph/knowledge_graph_agent_example.py
@@ -23,8 +23,8 @@
kg_agent = KnowledgeGraphAgent()
# Set example text input
-text_example = """CAMEL-AI.org is an open-source community dedicated to the
-study of autonomous and communicative agents.
+text_example = """CAMEL-AI.org is an open-source community dedicated to the
+study of autonomous and communicative agents.
"""
# Create an element from given text
@@ -72,72 +72,72 @@
{'agent_generated'}), Node(id='community', type='Concept', properties=
{'agent_generated'}), Node(id='study', type='Concept', properties=
{'agent_generated'}), Node(id='autonomous agents', type='Concept', properties=
-{'agent_generated'}), Node(id='communicative agents', type='Concept',
+{'agent_generated'}), Node(id='communicative agents', type='Concept',
properties={'agent_generated'})], relationships=[Relationship(subj=Node
-(id='CAMEL-AI.org', type='Organization', properties={'agent_generated'}),
-obj=Node(id='community', type='Concept', properties={'agent_generated'}),
+(id='CAMEL-AI.org', type='Organization', properties={'agent_generated'}),
+obj=Node(id='community', type='Concept', properties={'agent_generated'}),
type='FocusOn', properties={"'agent_generated'"}), Relationship(subj=Node
-(id='CAMEL-AI.org', type='Organization', properties={'agent_generated'}),
-obj=Node(id='study', type='Concept', properties={'agent_generated'}),
+(id='CAMEL-AI.org', type='Organization', properties={'agent_generated'}),
+obj=Node(id='study', type='Concept', properties={'agent_generated'}),
type='FocusOn', properties={"'agent_generated'"}), Relationship(subj=Node
-(id='CAMEL-AI.org', type='Organization', properties={'agent_generated'}),
+(id='CAMEL-AI.org', type='Organization', properties={'agent_generated'}),
obj=Node(id='autonomous agents', type='Concept', properties=
-{'agent_generated'}), type='FocusOn', properties={"'agent_generated'"}),
+{'agent_generated'}), type='FocusOn', properties={"'agent_generated'"}),
Relationship(subj=Node(id='CAMEL-AI.org', type='Organization', properties=
-{'agent_generated'}), obj=Node(id='communicative agents', type='Concept',
+{'agent_generated'}), obj=Node(id='communicative agents', type='Concept',
properties={'agent_generated'}), type='FocusOn', properties=
-{"'agent_generated'"})], source=)
===============================================================================
"""
custom_prompt = """
-You are tasked with extracting nodes and relationships from given content and
-structures them into Node and Relationship objects. Here's the outline of what
+You are tasked with extracting nodes and relationships from given content and
+structuring them into Node and Relationship objects. Here's the outline of what
you need to do:
Content Extraction:
-You should be able to process input content and identify entities mentioned
+You should be able to process input content and identify entities mentioned
within it.
-Entities can be any noun phrases or concepts that represent distinct entities
+Entities can be any noun phrases or concepts that represent distinct entities
in the context of the given content.
Node Extraction:
For each identified entity, you should create a Node object.
Each Node object should have a unique identifier (id) and a type (type).
-Additional properties associated with the node can also be extracted and
+Additional properties associated with the node can also be extracted and
stored.
Relationship Extraction:
You should identify relationships between entities mentioned in the content.
For each relationship, create a Relationship object.
-A Relationship object should have a subject (subj) and an object (obj) which
+A Relationship object should have a subject (subj) and an object (obj) which
are Node objects representing the entities involved in the relationship.
-Each relationship should also have a type (type), and additional properties if
+Each relationship should also have a type (type), and additional properties if
applicable.
-**New Requirement:**
-Each relationship must have a timestamp representing the time the relationship
+**New Requirement:**
+Each relationship must have a timestamp representing the time the relationship
was established or mentioned.
Output Formatting:
-The extracted nodes and relationships should be formatted as instances of the
+The extracted nodes and relationships should be formatted as instances of the
provided Node and Relationship classes.
Ensure that the extracted data adheres to the structure defined by the classes.
-Output the structured data in a format that can be easily validated against
+Output the structured data in a format that can be easily validated against
the provided code.
Instructions for you:
Read the provided content thoroughly.
-Identify distinct entities mentioned in the content and categorize them as
+Identify distinct entities mentioned in the content and categorize them as
nodes.
-Determine relationships between these entities and represent them as directed
+Determine relationships between these entities and represent them as directed
relationships, including a timestamp for each relationship.
Provide the extracted nodes and relationships in the specified format below.
Example for you:
Example Content:
-"John works at XYZ Corporation since 2020. He is a software engineer. The
+"John works at XYZ Corporation since 2020. He is a software engineer. The
company is located in New York City."
Expected Output:
@@ -150,14 +150,14 @@
Relationships:
-Relationship(subj=Node(id='John', type='Person'), obj=Node(id='XYZ
+Relationship(subj=Node(id='John', type='Person'), obj=Node(id='XYZ
Corporation', type='Organization'), type='WorksAt', timestamp='1717193166')
-Relationship(subj=Node(id='John', type='Person'), obj=Node(id='New York City',
+Relationship(subj=Node(id='John', type='Person'), obj=Node(id='New York City',
type='Location'), type='ResidesIn', timestamp='1719700236')
===== TASK =====
-Please extracts nodes and relationships from given content and structures them
-into Node and Relationship objects.
+Please extract nodes and relationships from the given content and structure
+them into Node and Relationship objects.
{task}
"""
@@ -188,13 +188,13 @@
### Relationships:
-1. Relationship(subj=Node(id='CAMEL-AI.org', type='Organization'),
+1. Relationship(subj=Node(id='CAMEL-AI.org', type='Organization'),
obj=Node(id='open-source community', type='Community'), type='IsPartOf',
timestamp='1717193166')
-2. Relationship(subj=Node(id='open-source community', type='Community'),
+2. Relationship(subj=Node(id='open-source community', type='Community'),
obj=Node(id='autonomous agents', type='Concept'), type='Studies',
timestamp='1719700236')
-3. Relationship(subj=Node(id='open-source community', type='Community'),
+3. Relationship(subj=Node(id='open-source community', type='Community'),
obj=Node(id='communicative agents', type='Concept'), type='Studies',
timestamp='1719700236')
===============================================================================
@@ -204,30 +204,30 @@
===============================================================================
nodes=[
Node(id='CAMEL-AI.org', type='Organization',
-properties={'source': 'agent_created'}),
+properties={'source': 'agent_created'}),
Node(id='open-source community', type='Community',
-properties={'source': 'agent_created'}),
+properties={'source': 'agent_created'}),
Node(id='autonomous agents', type='Concept',
-properties={'source': 'agent_created'}),
+properties={'source': 'agent_created'}),
Node(id='communicative agents', type='Concept',
-properties={'source': 'agent_created'})]
-relationships=[Relationship(subj=Node(id='CAMEL-AI.org', type='Organization',
-properties={'source': 'agent_created'}),
+properties={'source': 'agent_created'})]
+relationships=[Relationship(subj=Node(id='CAMEL-AI.org', type='Organization',
+properties={'source': 'agent_created'}),
obj=Node(id='open-source community', type='Community',
properties={'source': 'agent_created'}),
-type="IsA', timestamp='1717193166", properties={'source': 'agent_created'}),
-Relationship(subj=Node(id='open-source community', type='Community',
-properties={'source': 'agent_created'}),
-obj=Node(id='autonomous agents', type='Concept',
+type="IsA', timestamp='1717193166", properties={'source': 'agent_created'}),
+Relationship(subj=Node(id='open-source community', type='Community',
+properties={'source': 'agent_created'}),
+obj=Node(id='autonomous agents', type='Concept',
properties={'source': 'agent_created'}), type="Studies',
-timestamp='1719700236",
+timestamp='1719700236",
+properties={'source': 'agent_created'}),
+Relationship(subj=Node(id='open-source community', type='Community',
+properties={'source': 'agent_created'}),
+obj=Node(id='communicative agents', type='Concept',
properties={'source': 'agent_created'}),
-Relationship(subj=Node(id='open-source community', type='Community',
-properties={'source': 'agent_created'}),
-obj=Node(id='communicative agents', type='Concept',
-properties={'source': 'agent_created'}),
type="Studies', timestamp='1719700236",
-properties={'source': 'agent_created'})]
+properties={'source': 'agent_created'})]
source=
===============================================================================
"""
diff --git a/examples/loaders/apify_example.py b/examples/loaders/apify_example.py
index bdf1a01925..de0fc90fba 100644
--- a/examples/loaders/apify_example.py
+++ b/examples/loaders/apify_example.py
@@ -48,18 +48,18 @@
===============================================================================
[{'url': 'https://www.camel-ai.org/', 'crawl': {'loadedUrl': 'https://www.camel
-ai.org/', 'loadedTime': '2024-10-27T04:51:16.651Z', 'referrerUrl': 'https://ww
-w.camel-ai.org/', 'depth': 0, 'httpStatusCode': 200}, 'metadata':
-{'canonicalUrl': 'https://www.camel-ai.org/', 'title': 'CAMEL-AI',
-'description': 'CAMEL-AI.org is the 1st LLM multi-agent framework and an
-open-source community dedicated to finding the scaling law of agents.',
+w.camel-ai.org/', 'depth': 0, 'httpStatusCode': 200}, 'metadata':
+{'canonicalUrl': 'https://www.camel-ai.org/', 'title': 'CAMEL-AI',
+'description': 'CAMEL-AI.org is the 1st LLM multi-agent framework and an
+open-source community dedicated to finding the scaling law of agents.',
'author': None, 'keywords': None, 'languageCode': 'en', 'openGraph':
-[{'property': 'og:title', 'content': 'CAMEL-AI'}, {'property':
-'og:description', 'content': 'CAMEL-AI.org is the 1st LLM multi-agent
-framework and an open-source community dedicated to finding the scaling law of
-agents.'}, {'property': 'twitter:title', 'content': 'CAMEL-AI'}, {'property':
-'twitter:description', 'content': 'CAMEL-AI.org is the 1st LLM multi-agent
-framework and an open-source community dedicated to finding the scaling law of
-agents.'}, {'property': 'og:type', 'content': 'website'}], 'jsonLd': None,
+[{'property': 'og:title', 'content': 'CAMEL-AI'}, {'property':
+'og:description', 'content': 'CAMEL-AI.org is the 1st LLM multi-agent
+framework and an open-source community dedicated to finding the scaling law of
+agents.'}, {'property': 'twitter:title', 'content': 'CAMEL-AI'}, {'property':
+'twitter:description', 'content': 'CAMEL-AI.org is the 1st LLM multi-agent
+framework and an open-source community dedicated to finding the scaling law of
+agents.'}, {'property': 'og:type', 'content': 'website'}], 'jsonLd': None,
'headers': {'date': 'Sun, 27 Oct 2024 04:50:18 GMT', 'content-type': 'text/
html', 'cf-ray': '8d901082dae7efbe-PDX', 'cf-cache-status': 'HIT', 'age': '10
81', 'content-encoding': 'gzip', 'last-modified': 'Sat, 26 Oct 2024 11:51:32 G
diff --git a/examples/loaders/crawl4ai_example.py b/examples/loaders/crawl4ai_example.py
index 1e3b54604f..8a9e8281b6 100644
--- a/examples/loaders/crawl4ai_example.py
+++ b/examples/loaders/crawl4ai_example.py
@@ -34,41 +34,41 @@
## Books
-A [fictional bookstore](http://books.toscrape.com) that desperately wants to
-be scraped. It's a safe place for beginners learning web scraping and for
-developers validating their scraping technologies as well.
+A [fictional bookstore](http://books.toscrape.com) that desperately wants to
+be scraped. It's a safe place for beginners learning web scraping and for
+developers validating their scraping technologies as well.
Available at: [books.toscrape.com](http://books.toscrape.com)
[](http://books.toscrape.com)
-Details
----
-Amount of items | 1000
-Pagination | ✔
-Items per page | max 20
-Requires JavaScript | ✘
-
+Details
+---
+Amount of items | 1000
+Pagination | ✔
+Items per page | max 20
+Requires JavaScript | ✘
+
## Quotes
[A website](http://quotes.toscrape.com/) that lists quotes from famous people.
-It has many endpoints showing the quotes in many different ways, each of them
+It has many endpoints showing the quotes in many different ways, each of them
including new scraping challenges for you, as described below.
[](http://quotes.toscrape.com)
-Endpoints
----
-[Default](http://quotes.toscrape.com/)| Microdata and pagination
-[Scroll](http://quotes.toscrape.com/scroll) | infinite scrolling pagination
-[JavaScript](http://quotes.toscrape.com/js) | JavaScript generated content
-[Delayed](http://quotes.toscrape.com/js-delayed) | Same as JavaScript but with
- a delay (?delay=10000)
-[Tableful](http://quotes.toscrape.com/tableful) | a table based messed-up
+Endpoints
+---
+[Default](http://quotes.toscrape.com/)| Microdata and pagination
+[Scroll](http://quotes.toscrape.com/scroll) | infinite scrolling pagination
+[JavaScript](http://quotes.toscrape.com/js) | JavaScript generated content
+[Delayed](http://quotes.toscrape.com/js-delayed) | Same as JavaScript but with
+ a delay (?delay=10000)
+[Tableful](http://quotes.toscrape.com/tableful) | a table based messed-up
layout
-[Login](http://quotes.toscrape.com/login) | login with CSRF token
- (any user/passwd works)
-[ViewState](http://quotes.toscrape.com/search.aspx) | an AJAX based filter
- form with ViewStates
+[Login](http://quotes.toscrape.com/login) | login with CSRF token
+ (any user/passwd works)
+[ViewState](http://quotes.toscrape.com/search.aspx) | an AJAX based filter
+ form with ViewStates
[Random](http://quotes.toscrape.com/random) | a single random quote
===============================================================================
'''
@@ -79,11 +79,11 @@
print(scrape_result)
'''
===============================================================================
-{url: 'https://toscrape.com/',
-'raw_result': CrawlResult(url='https://toscrape.com/', markdown=...,
+{url: 'https://toscrape.com/',
+'raw_result': CrawlResult(url='https://toscrape.com/', markdown=...,
cleaned_html=..., links=...),
-'markdown': "\n\n# Web Scraping Sandbox\n\n## Books...",
-'cleaned_html': '\n
\n
\n
\n\n
\n
\n\n
Web Scraping Sandbox
\n...'}
===============================================================================
diff --git a/examples/loaders/firecrawl_example.py b/examples/loaders/firecrawl_example.py
index 9e2da6a810..83ff7f9cd9 100644
--- a/examples/loaders/firecrawl_example.py
+++ b/examples/loaders/firecrawl_example.py
@@ -36,17 +36,17 @@
Camel-AI Team
-We are finding the
+We are finding the
scaling law of agent
=========================================
-🐫 CAMEL is an open-source library designed for the study of autonomous and
-communicative agents. We believe that studying these agents on a large scale
-offers valuable insights into their behaviors, capabilities, and potential
-risks. To facilitate research in this field, we implement and support various
+🐫 CAMEL is an open-source library designed for the study of autonomous and
+communicative agents. We believe that studying these agents on a large scale
+offers valuable insights into their behaviors, capabilities, and potential
+risks. To facilitate research in this field, we implement and support various
types of agents, tasks, prompts, models, and simulated environments.
-**We are** always looking for more **contributors** and **collaborators**.
+**We are** always looking for more **contributors** and **collaborators**.
Contact us to join forces via [Slack](https://join.slack.com/t/camel-kwr1314/
shared_invite/zt-1vy8u9lbo-ZQmhIAyWSEfSwLCl2r2eKA)
or [Discord](https://discord.gg/CNcNpquyDc)...
@@ -74,14 +74,14 @@ class TopArticlesSchema(BaseModel):
print(response)
'''
===============================================================================
-{'top': [{'title': 'Foobar2000', 'points': 69, 'by': 'citruscomputing',
-'commentsURL': 'item?id=41122920'}, {'title': 'How great was the Great
+{'top': [{'title': 'Foobar2000', 'points': 69, 'by': 'citruscomputing',
+'commentsURL': 'item?id=41122920'}, {'title': 'How great was the Great
Oxidation Event?', 'points': 145, 'by': 'Brajeshwar', 'commentsURL': 'item?
-id=41119080'}, {'title': 'Launch HN: Martin (YC S23) - Using LLMs to Make a
+id=41119080'}, {'title': 'Launch HN: Martin (YC S23) - Using LLMs to Make a
Better Siri', 'points': 73, 'by': 'darweenist', 'commentsURL': 'item?
-id=41119443'}, {'title': 'macOS in QEMU in Docker', 'points': 488, 'by':
-'lijunhao', 'commentsURL': 'item?id=41116473'}, {'title': 'Crafting
-Interpreters with Rust: On Garbage Collection', 'points': 148, 'by':
+id=41119443'}, {'title': 'macOS in QEMU in Docker', 'points': 488, 'by':
+'lijunhao', 'commentsURL': 'item?id=41116473'}, {'title': 'Crafting
+Interpreters with Rust: On Garbage Collection', 'points': 148, 'by':
'amalinovic', 'commentsURL': 'item?id=41108662'}]}
===============================================================================
'''
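The `TopArticlesSchema(BaseModel)` referenced in the hunk header above is a Pydantic model used for structured extraction. A plausible sketch, with field names inferred from the dict keys in the printed output (the actual schema in the example file may differ):

```python
from pydantic import BaseModel

class ArticleSchema(BaseModel):
    # One Hacker News entry, matching the keys seen in the printed result.
    title: str
    points: int
    by: str
    commentsURL: str

class TopArticlesSchema(BaseModel):
    # Firecrawl's structured extraction returns a dict validated by this model.
    top: list[ArticleSchema]

# Validating a fragment of the output shown above (Pydantic v2 API):
data = {'top': [{'title': 'Foobar2000', 'points': 69,
                 'by': 'citruscomputing',
                 'commentsURL': 'item?id=41122920'}]}
parsed = TopArticlesSchema.model_validate(data)
```

Passing such a schema to the extractor constrains the LLM's output to this JSON shape, which is why the printed result parses cleanly into typed fields.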
diff --git a/examples/loaders/mistral_example.py b/examples/loaders/mistral_example.py
index 25ec022c7a..2eb0e9e862 100644
--- a/examples/loaders/mistral_example.py
+++ b/examples/loaders/mistral_example.py
@@ -24,63 +24,63 @@
print(url_ocr_response)
"""
============================================================================
-pages=[OCRPageObject(index=5, markdown='\n\nFigure 2:
-Scatter plot of predicted accuracy versus (true) OOD accuracy. Each point
-denotes a different OOD dataset, all evaluated with the same DenseNet121
-model. We only plot the best three methods. With ATC (ours), we refer to
-ATC-NE. We observe that ATC significantly outperforms other methods and with
-ATC, we recover the desired line $y=x$ with a robust linear fit. Aggregated
-estimation error in Table 1 and plots for other datasets and architectures in
-App. H.\nof the target accuracy with various methods given access to only
-unlabeled data from the target. Unless noted otherwise, all models are trained
-only on samples from the source distribution with the main exception of
-pre-training on a different distribution. We use labeled examples from the
-target distribution to only obtain true error estimates.\n\nDatasets. First,
+pages=[OCRPageObject(index=5, markdown='\n\nFigure 2:
+Scatter plot of predicted accuracy versus (true) OOD accuracy. Each point
+denotes a different OOD dataset, all evaluated with the same DenseNet121
+model. We only plot the best three methods. With ATC (ours), we refer to
+ATC-NE. We observe that ATC significantly outperforms other methods and with
+ATC, we recover the desired line $y=x$ with a robust linear fit. Aggregated
+estimation error in Table 1 and plots for other datasets and architectures in
+App. H.\nof the target accuracy with various methods given access to only
+unlabeled data from the target. Unless noted otherwise, all models are trained
+only on samples from the source distribution with the main exception of
+pre-training on a different distribution. We use labeled examples from the
+target distribution to only obtain true error estimates.\n\nDatasets. First,
we consider synthetic shifts induced due to different visual corruptions (e.
-g., shot noise, motion blur etc.) under ImageNet-C (Hendrycks \\& Dietterich,
-2019). Next, we consider natural shifts due to differences in the data
-collection process of ImageNet (Russakovsky et al., 2015), e.g, ImageNetv2
-(Recht et al., 2019). We also consider images with artistic renditions of
-object classes, i.e., ImageNet-R (Hendrycks et al., 2021) and ImageNet-Sketch
-(Wang et al., 2019). Note that renditions dataset only contains a subset 200
-classes from ImageNet. To include renditions dataset in our testbed, we
-include results on ImageNet restricted to these 200 classes (which we call
-ImageNet-200) along with full ImageNet.\n\nSecond, we consider BREEDS
-(Santurkar et al., 2020) to assess robustness to subpopulation shifts, in
-particular, to understand how accuracy estimation methods behave when novel
-subpopulations not observed during training are introduced. BREEDS leverages
-class hierarchy in ImageNet to create 4 datasets ENTITY-13, ENTITY-30,
-LIVING-17, NON-LIVING-26. We focus on natural and synthetic shifts as in
-ImageNet on same and different subpopulations in BREEDs. Third, from WILDS
-(Koh et al., 2021) benchmark, we consider FMoW-WILDS (Christie et al., 2018),
-RxRx1-WILDS (Taylor et al., 2019), Amazon-WILDS (Ni et al., 2019),
-CivilComments-WILDS (Borkan et al., 2019) to consider distribution shifts
-faced in the wild.\n\nFinally, similar to ImageNet, we consider (i) synthetic
-shifts (CIFAR-10-C) due to common corruptions; and (ii) natural shift (i.e.,
-CIFARv2 (Recht et al., 2018)) on CIFAR-10 (Krizhevsky \\& Hinton, 2009). On
-CIFAR-100, we just have synthetic shifts due to common corruptions. For
-completeness, we also consider natural shifts on MNIST (LeCun et al., 1998) as
-in the prior work (Deng \\& Zheng, 2021). We use three real shifted datasets,
-i.e., USPS (Hull, 1994), SVHN (Netzer et al., 2011) and QMNIST (Yadav \\&
+g., shot noise, motion blur etc.) under ImageNet-C (Hendrycks \\& Dietterich,
+2019). Next, we consider natural shifts due to differences in the data
+collection process of ImageNet (Russakovsky et al., 2015), e.g, ImageNetv2
+(Recht et al., 2019). We also consider images with artistic renditions of
+object classes, i.e., ImageNet-R (Hendrycks et al., 2021) and ImageNet-Sketch
+(Wang et al., 2019). Note that renditions dataset only contains a subset 200
+classes from ImageNet. To include renditions dataset in our testbed, we
+include results on ImageNet restricted to these 200 classes (which we call
+ImageNet-200) along with full ImageNet.\n\nSecond, we consider BREEDS
+(Santurkar et al., 2020) to assess robustness to subpopulation shifts, in
+particular, to understand how accuracy estimation methods behave when novel
+subpopulations not observed during training are introduced. BREEDS leverages
+class hierarchy in ImageNet to create 4 datasets ENTITY-13, ENTITY-30,
+LIVING-17, NON-LIVING-26. We focus on natural and synthetic shifts as in
+ImageNet on same and different subpopulations in BREEDs. Third, from WILDS
+(Koh et al., 2021) benchmark, we consider FMoW-WILDS (Christie et al., 2018),
+RxRx1-WILDS (Taylor et al., 2019), Amazon-WILDS (Ni et al., 2019),
+CivilComments-WILDS (Borkan et al., 2019) to consider distribution shifts
+faced in the wild.\n\nFinally, similar to ImageNet, we consider (i) synthetic
+shifts (CIFAR-10-C) due to common corruptions; and (ii) natural shift (i.e.,
+CIFARv2 (Recht et al., 2018)) on CIFAR-10 (Krizhevsky \\& Hinton, 2009). On
+CIFAR-100, we just have synthetic shifts due to common corruptions. For
+completeness, we also consider natural shifts on MNIST (LeCun et al., 1998) as
+in the prior work (Deng \\& Zheng, 2021). We use three real shifted datasets,
+i.e., USPS (Hull, 1994), SVHN (Netzer et al., 2011) and QMNIST (Yadav \\&
Bottou, 2019). We give a detailed overview of our setup in App. F.
-\n\nArchitectures and Evaluation. For ImageNet, BREEDS, CIFAR, FMoW-WILDS,
-RxRx1-WILDS datasets, we use DenseNet121 (Huang et al., 2017) and ResNet50 (He
-et al., 2016) architectures. For Amazon-WILDS and CivilComments-WILDS, we
-fine-tune a DistilBERT-base-uncased (Sanh et al., 2019) model. For MNIST, we
-train a fully connected multilayer perceptron. We use standard training with
-benchmarked hyperparameters. To compare methods, we report average absolute
-difference between the true accuracy on the target data and the estimated
-accuracy on the same unlabeled examples. We refer to this metric as Mean
-Absolute estimation Error (MAE). Along with MAE, we also show scatter plots to
-visualize performance at individual target sets. Refer to App. G for
-additional details on the setup.\n\nMethods With ATC-NE, we denote ATC with
-negative entropy score function and with ATC-MC, we denote ATC with maximum
-confidence score function. For all methods, we implement post-hoc calibration
-on validation source data with Temperature Scaling (TS; Guo et al. (2017)).
-Below we briefly discuss baselines methods compared in our work and relegate
-details to App. E.', images=[OCRImageObject(id='img-0.jpeg', top_left_x=294,
-top_left_y=180, bottom_right_x=1387, bottom_right_y=558, image_base64=None,
-image_annotation=None)], dimensions=OCRPageDimensions(dpi=200, height=2200,
+\n\nArchitectures and Evaluation. For ImageNet, BREEDS, CIFAR, FMoW-WILDS,
+RxRx1-WILDS datasets, we use DenseNet121 (Huang et al., 2017) and ResNet50 (He
+et al., 2016) architectures. For Amazon-WILDS and CivilComments-WILDS, we
+fine-tune a DistilBERT-base-uncased (Sanh et al., 2019) model. For MNIST, we
+train a fully connected multilayer perceptron. We use standard training with
+benchmarked hyperparameters. To compare methods, we report average absolute
+difference between the true accuracy on the target data and the estimated
+accuracy on the same unlabeled examples. We refer to this metric as Mean
+Absolute estimation Error (MAE). Along with MAE, we also show scatter plots to
+visualize performance at individual target sets. Refer to App. G for
+additional details on the setup.\n\nMethods With ATC-NE, we denote ATC with
+negative entropy score function and with ATC-MC, we denote ATC with maximum
+confidence score function. For all methods, we implement post-hoc calibration
+on validation source data with Temperature Scaling (TS; Guo et al. (2017)).
+Below we briefly discuss baselines methods compared in our work and relegate
+details to App. E.', images=[OCRImageObject(id='img-0.jpeg', top_left_x=294,
+top_left_y=180, bottom_right_x=1387, bottom_right_y=558, image_base64=None,
+image_annotation=None)], dimensions=OCRPageDimensions(dpi=200, height=2200,
width=1700))] model='mistral-ocr-2505-completion' usage_info=OCRUsageInfo
(pages_processed=1, doc_size_bytes=3002783) document_annotation=None
============================================================================
@@ -94,14 +94,14 @@ class hierarchy in ImageNet to create 4 datasets ENTITY-13, ENTITY-30,
print(image_ocr_response)
"""
============================================================================
-pages=[OCRPageObject(index=0, markdown='PLACE FACE UP ON DASH\nCITY OF PALO
-ALTO\nNOT VALID FOR\nONSTREET PARKING\n\nExpiration Date/Time\n11:59 PM\nAUG
-19, 2024\n\nPurchase Date/Time: 01:34pm Aug 19, 2024\nTotal Due: $15.00\nTotal
-Paid: $15.00\nTicket #: 00005883\nS/N #: 520117260957\nSetting: Permit
-Machines\nMach Name: Civic Center\n\n#****-1224, Visa\nDISPLAY FACE UP ON
+pages=[OCRPageObject(index=0, markdown='PLACE FACE UP ON DASH\nCITY OF PALO
+ALTO\nNOT VALID FOR\nONSTREET PARKING\n\nExpiration Date/Time\n11:59 PM\nAUG
+19, 2024\n\nPurchase Date/Time: 01:34pm Aug 19, 2024\nTotal Due: $15.00\nTotal
+Paid: $15.00\nTicket #: 00005883\nS/N #: 520117260957\nSetting: Permit
+Machines\nMach Name: Civic Center\n\n#****-1224, Visa\nDISPLAY FACE UP ON
DASH\n\nPERMIT EXPIRES\nAT MIDNIGHT', images=[], dimensions=OCRPageDimensions
-(dpi=200, height=3210, width=1806))] model='mistral-ocr-2505-completion'
-usage_info=OCRUsageInfo(pages_processed=1, doc_size_bytes=3110191)
+(dpi=200, height=3210, width=1806))] model='mistral-ocr-2505-completion'
+usage_info=OCRUsageInfo(pages_processed=1, doc_size_bytes=3110191)
document_annotation=None
============================================================================
"""
@@ -135,20 +135,20 @@ class hierarchy in ImageNet to create 4 datasets ENTITY-13, ENTITY-30,
"""
============================================================================
-pages=[OCRPageObject(index=0, markdown='PLACE FACE UP ON DASH\nCITY OF PALO
-ALTO\nNOT VALID FOR\nONSTREET PARKING\n\nExpiration Date/Time\n11:59 PM\nAUG
-19, 2024\n\nPurchase Date/Time: 01:34pm Aug 19, 2024\nTotal Due: $15.00\nTotal
-Paid: $15.00\nTicket #: 00005883\nS/N #: 520117260957\nSetting: Permit
-Machines\nMach Name: Civic Center\n\n#****-1224, Visa\nDISPLAY FACE UP ON
+pages=[OCRPageObject(index=0, markdown='PLACE FACE UP ON DASH\nCITY OF PALO
+ALTO\nNOT VALID FOR\nONSTREET PARKING\n\nExpiration Date/Time\n11:59 PM\nAUG
+19, 2024\n\nPurchase Date/Time: 01:34pm Aug 19, 2024\nTotal Due: $15.00\nTotal
+Paid: $15.00\nTicket #: 00005883\nS/N #: 520117260957\nSetting: Permit
+Machines\nMach Name: Civic Center\n\n#****-1224, Visa\nDISPLAY FACE UP ON
DASH\n\nPERMIT EXPIRES\nAT MIDNIGHT', images=[], dimensions=OCRPageDimensions
-(dpi=200, height=3210, width=1806))] model='mistral-ocr-2505-completion'
-usage_info=OCRUsageInfo(pages_processed=1, doc_size_bytes=3110191)
+(dpi=200, height=3210, width=1806))] model='mistral-ocr-2505-completion'
+usage_info=OCRUsageInfo(pages_processed=1, doc_size_bytes=3110191)
document_annotation=None
-pages=[OCRPageObject(index=0, markdown='CAMEL AI Research Report\nThis is a
-complex mock PDF document for testing OCR capabilities.\nIt contains multiple
+pages=[OCRPageObject(index=0, markdown='CAMEL AI Research Report\nThis is a
+complex mock PDF document for testing OCR capabilities.\nIt contains multiple
lines of text and formatting.', images=[], dimensions=OCRPageDimensions
-(dpi=200, height=2339, width=1653))] model='mistral-ocr-2505-completion'
-usage_info=OCRUsageInfo(pages_processed=1, doc_size_bytes=612)
+(dpi=200, height=2339, width=1653))] model='mistral-ocr-2505-completion'
+usage_info=OCRUsageInfo(pages_processed=1, doc_size_bytes=612)
document_annotation=None
============================================================================
"""
diff --git a/examples/memories/agent_memory_example.py b/examples/memories/agent_memory_example.py
index fd5c52c8f9..9e924602eb 100644
--- a/examples/memories/agent_memory_example.py
+++ b/examples/memories/agent_memory_example.py
@@ -55,8 +55,8 @@
)
'''
===============================================================================
-Assistant response 1: Yes, I can remember that instruction. "Banana" is
-designated as a country in this context. How can I assist you further with
+Assistant response 1: Yes, I can remember that instruction. "Banana" is
+designated as a country in this context. How can I assist you further with
this information?
===============================================================================
'''
@@ -69,7 +69,7 @@
)
'''
===============================================================================
-Assistant response 2: Got it! I will remember that CAMEL lives in Banana. How
+Assistant response 2: Got it! I will remember that CAMEL lives in Banana. How
can I assist you further?
===============================================================================
'''
@@ -104,8 +104,8 @@
)
'''
===============================================================================
-New Agent response (after loading memory): We were discussing that "Banana" is
-a country, and you mentioned that CAMEL lives in Banana. How can I assist you
+New Agent response (after loading memory): We were discussing that "Banana" is
+a country, and you mentioned that CAMEL lives in Banana. How can I assist you
further with this information?
===============================================================================
'''
@@ -142,13 +142,13 @@
You are a helpful assistant
You are a helpful assistant
Hello, can you remember these instructions?: Banana is a country.
-Yes, I can remember that instruction. "Banana" is designated as a country in
+Yes, I can remember that instruction. "Banana" is designated as a country in
this context. How can I assist you further with this information?
Please store and recall this next time: CAMEL lives in Banana.
-Got it! I will remember that CAMEL lives in Banana. How can I assist you
+Got it! I will remember that CAMEL lives in Banana. How can I assist you
further?
What were we talking about?
-We were discussing that "Banana" is a country, and you mentioned that CAMEL
+We were discussing that "Banana" is a country, and you mentioned that CAMEL
lives in Banana. How can I assist you further with this information?
Another system message
This is memory from a second agent
diff --git a/examples/memories/agent_memory_vector_db_example.py b/examples/memories/agent_memory_vector_db_example.py
index ec8c5f5141..cfadbebfec 100644
--- a/examples/memories/agent_memory_vector_db_example.py
+++ b/examples/memories/agent_memory_vector_db_example.py
@@ -66,13 +66,13 @@
)
'''
===============================================================================
-Agent #1 response: Yes, dolphins use echolocation as a way to navigate and
-hunt for food in their aquatic environment. They emit sound waves that travel
-through the water, and when these sound waves hit an object, they bounce back
-to the dolphin. By interpreting the returning echoes, dolphins can determine
-the size, shape, distance, and even the texture of objects around them. This
-ability is particularly useful in murky waters where visibility is limited.
-Echolocation is a remarkable adaptation that enhances their ability to survive
+Agent #1 response: Yes, dolphins use echolocation as a way to navigate and
+hunt for food in their aquatic environment. They emit sound waves that travel
+through the water, and when these sound waves hit an object, they bounce back
+to the dolphin. By interpreting the returning echoes, dolphins can determine
+the size, shape, distance, and even the texture of objects around them. This
+ability is particularly useful in murky waters where visibility is limited.
+Echolocation is a remarkable adaptation that enhances their ability to survive
and thrive in their habitats.
===============================================================================
'''
@@ -85,12 +85,12 @@
)
'''
===============================================================================
-Agent #1 response: That's correct! Whales are indeed the largest mammals on
-Earth, with the blue whale being the largest animal known to have ever
-existed. They belong to the order Cetacea, which also includes dolphins and
-porpoises. Both dolphins and whales use echolocation to navigate and hunt for
-food in the ocean, although their methods and the specifics of their
-echolocation can vary. Would you like to know more about either dolphins or
+Agent #1 response: That's correct! Whales are indeed the largest mammals on
+Earth, with the blue whale being the largest animal known to have ever
+existed. They belong to the order Cetacea, which also includes dolphins and
+porpoises. Both dolphins and whales use echolocation to navigate and hunt for
+food in the ocean, although their methods and the specifics of their
+echolocation can vary. Would you like to know more about either dolphins or
whales?
===============================================================================
'''
@@ -107,7 +107,7 @@
# 7) Create a new agent, load that JSON memory to confirm retrieval
new_agent1 = ChatAgent(
- system_message="""You are the resurrected assistant #1 with
+ system_message="""You are the resurrected assistant #1 with
vector DB memory.""",
agent_id="agent_001", # same agent_id to match the saved records
model=model,
@@ -137,47 +137,47 @@
)
'''
===============================================================================
-New Agent #1 response (after loading memory): Marine mammals are a diverse
-group of mammals that are primarily adapted to life in the ocean. They include
-several different orders, each with unique characteristics and adaptations.
+New Agent #1 response (after loading memory): Marine mammals are a diverse
+group of mammals that are primarily adapted to life in the ocean. They include
+several different orders, each with unique characteristics and adaptations.
Here are some key points about marine mammals:
1. **Orders of Marine Mammals**:
- - **Cetacea**: This order includes whales, dolphins, and porpoises. They
- are fully aquatic and have adaptations such as streamlined bodies and the
+ - **Cetacea**: This order includes whales, dolphins, and porpoises. They
+ are fully aquatic and have adaptations such as streamlined bodies and the
ability to hold their breath for long periods.
- - **Pinnipedia**: This group includes seals, sea lions, and walruses. They
+ - **Pinnipedia**: This group includes seals, sea lions, and walruses. They
are semi-aquatic, spending time both in the water and on land.
- - **Sirenia**: This order includes manatees and dugongs, which are
+ - **Sirenia**: This order includes manatees and dugongs, which are
herbivorous and primarily inhabit warm coastal waters and rivers.
- - **Marine Carnivora**: This includes animals like sea otters and polar
+ - **Marine Carnivora**: This includes animals like sea otters and polar
bears, which rely on marine environments for food.
-2. **Adaptations**: Marine mammals have various adaptations for life in the
+2. **Adaptations**: Marine mammals have various adaptations for life in the
water, including:
- Streamlined bodies for efficient swimming.
- Blubber for insulation against cold water.
- Specialized respiratory systems for holding breath and diving.
- - Echolocation in some species (like dolphins and certain whales) for
+ - Echolocation in some species (like dolphins and certain whales) for
navigation and hunting.
-3. **Reproduction**: Most marine mammals give live birth and nurse their young
-with milk. They typically have longer gestation periods compared to
+3. **Reproduction**: Most marine mammals give live birth and nurse their young
+with milk. They typically have longer gestation periods compared to
terrestrial mammals.
-4. **Social Structures**: Many marine mammals are social animals, living in
-groups called pods (in the case of dolphins and some whales) or colonies (in
+4. **Social Structures**: Many marine mammals are social animals, living in
+groups called pods (in the case of dolphins and some whales) or colonies (in
the case of seals).
-5. **Conservation**: Many marine mammals face threats from human activities,
-including habitat loss, pollution, climate change, and hunting. Conservation
+5. **Conservation**: Many marine mammals face threats from human activities,
+including habitat loss, pollution, climate change, and hunting. Conservation
efforts are crucial to protect these species and their habitats.
-6. **Intelligence**: Many marine mammals, particularly cetaceans, are known
-for their high intelligence, complex social behaviors, and communication
+6. **Intelligence**: Many marine mammals, particularly cetaceans, are known
+for their high intelligence, complex social behaviors, and communication
skills.
-If you have specific questions or topics related to marine mammals that you'd
+If you have specific questions or topics related to marine mammals that you'd
like to explore further, feel free to ask!
===============================================================================
'''
diff --git a/examples/memories/score_based_context_example.py b/examples/memories/score_based_context_example.py
index 771ab9549f..580a9ca9ab 100644
--- a/examples/memories/score_based_context_example.py
+++ b/examples/memories/score_based_context_example.py
@@ -74,7 +74,7 @@
print(output)
"""
===============================================================================
-[{'role': 'assistant', 'content': 'Nice to meet you.'}, {'role': 'assistant',
+[{'role': 'assistant', 'content': 'Nice to meet you.'}, {'role': 'assistant',
'content': 'Hello world!'}, {'role': 'assistant', 'content': 'How are you?'}]
===============================================================================
"""
@@ -132,7 +132,7 @@
"""
===============================================================================
Context truncation required (33 > 21), pruning low-score messages.
-[{'role': 'assistant', 'content': 'Hello world!'}, {'role': 'assistant',
+[{'role': 'assistant', 'content': 'Hello world!'}, {'role': 'assistant',
'content': 'How are you?'}]
===============================================================================
"""
@@ -203,8 +203,8 @@
"""
===============================================================================
Context truncation required (46 > 40), pruning low-score messages.
-[{'role': 'system', 'content': 'You are a helpful assistant.'}, {'role':
-'assistant', 'content': 'Hello world!'}, {'role': 'assistant', 'content': 'How
+[{'role': 'system', 'content': 'You are a helpful assistant.'}, {'role':
+'assistant', 'content': 'Hello world!'}, {'role': 'assistant', 'content': 'How
are you?'}]
===============================================================================
"""
diff --git a/examples/memories/vector_db_memory_example.py b/examples/memories/vector_db_memory_example.py
index 6d4f4700e2..0385f47e01 100644
--- a/examples/memories/vector_db_memory_example.py
+++ b/examples/memories/vector_db_memory_example.py
@@ -87,7 +87,7 @@
)
'''
===============================================================================
-Agent 1 response: You mentioned elephants. Did you know that elephants are
+Agent 1 response: You mentioned elephants. Did you know that elephants are
excellent swimmers and can use their trunks as snorkels while swimming?
===============================================================================
'''
@@ -97,8 +97,8 @@
)
'''
===============================================================================
-Agent 2 response: I'm sorry, but I do not have the ability to remember past
-interactions or conversations. Can you please remind me what you told me about
+Agent 2 response: I'm sorry, but I do not have the ability to remember past
+interactions or conversations. Can you please remind me what you told me about
stars and moons?
===============================================================================
'''
@@ -107,16 +107,16 @@
print("\nAgent 1's memory records:")
for ctx_record in vectordb_memory_agent1.retrieve():
print(
- f"""Score: {ctx_record.score:.2f} |
+ f"""Score: {ctx_record.score:.2f} |
Content: {ctx_record.memory_record.message.content}"""
)
'''
===============================================================================
Agent 1's memory records:
-Score: 1.00 |
+Score: 1.00 |
Content: What did I tell you about whales or elephants?
-Score: 0.59 |
- Content: You mentioned elephants. Did you know that elephants are
+Score: 0.59 |
+ Content: You mentioned elephants. Did you know that elephants are
excellent swimmers and can use their trunks as snorkels while swimming?
===============================================================================
'''
@@ -134,8 +134,8 @@
Score: 1.00 |
Content: What have I told you about stars and moons?
Score: 0.68 |
- Content: I'm sorry, but I do not have the ability to remember past
- interactions or conversations. Can you please remind me what you told
+ Content: I'm sorry, but I do not have the ability to remember past
+ interactions or conversations. Can you please remind me what you told
me about stars and moons?
===============================================================================
'''
diff --git a/examples/models/aiml_model_example.py b/examples/models/aiml_model_example.py
index 86163886b1..b54bb09f2e 100644
--- a/examples/models/aiml_model_example.py
+++ b/examples/models/aiml_model_example.py
@@ -35,7 +35,7 @@
'''
===============================================================================
- Hello CAMEL AI! It's great to meet a community dedicated to the study of
+ Hello CAMEL AI! It's great to meet a community dedicated to the study of
autonomous and communicative agents. How can I assist you today?
===============================================================================
'''
diff --git a/examples/models/anthiropic_model_example.py b/examples/models/anthiropic_model_example.py
index e1b581d675..fc40ec4856 100644
--- a/examples/models/anthiropic_model_example.py
+++ b/examples/models/anthiropic_model_example.py
@@ -35,7 +35,7 @@
# Set agent
camel_agent = ChatAgent(system_message=sys_msg, model=model)
-user_msg = """Say hi to CAMEL AI, one open-source community dedicated to the
+user_msg = """Say hi to CAMEL AI, one open-source community dedicated to the
study of autonomous and communicative agents."""
# Get response information
@@ -54,7 +54,7 @@
camel_agent = ChatAgent(model=model)
-user_msg = """Write a bash script that takes a matrix represented as a string
+user_msg = """Write a bash script that takes a matrix represented as a string
with format '[1,2],[3,4],[5,6]' and prints the transpose in the same format."""
response = camel_agent.step(user_msg)
@@ -86,7 +86,7 @@
# Determine dimensions of the matrix
row_count="${#rows[@]}"
-IFS=',' read -ra first_row <<< "${rows[0]//[\[\]]}" # Remove brackets from
+IFS=',' read -ra first_row <<< "${rows[0]//[\[\]]}" # Remove brackets from
first row
col_count="${#first_row[@]}"
@@ -171,7 +171,7 @@ def my_add(a: int, b: int) -> int:
"""
===============================================================================
Tool was called successfully!
-Tool calls: [ToolCallingRecord(tool_name='my_add', args={'a': 2, 'b': 2},
+Tool calls: [ToolCallingRecord(tool_name='my_add', args={'a': 2, 'b': 2},
result=4, tool_call_id='toolu_01L1KV8GZtMEyHUGTudpMg5g')]
===============================================================================
"""
@@ -186,7 +186,7 @@ def my_add(a: int, b: int) -> int:
camel_agent = ChatAgent(model=model)
-user_msg = """Are there an infinite number of prime numbers such that n mod 4
+user_msg = """Are there an infinite number of prime numbers such that n mod 4
== 3?"""
response = camel_agent.step(user_msg)
@@ -196,36 +196,36 @@ def my_add(a: int, b: int) -> int:
===============================================================================
Yes, there are infinitely many prime numbers that are congruent to 3 modulo 4.
-This can be proven using a technique similar to Euclid's proof of the
+This can be proven using a technique similar to Euclid's proof of the
infinitude of primes. Here's the proof:
**Proof by contradiction:**
-Assume there are only finitely many primes of the form 4k + 3. Let's call them
+Assume there are only finitely many primes of the form 4k + 3. Let's call them
p₁, p₂, ..., pₙ where each pᵢ ≡ 3 (mod 4).
Consider the number N = 4(p₁ × p₂ × ... × pₙ) - 1.
-Note that N ≡ 3 (mod 4) since 4(p₁ × p₂ × ... × pₙ) ≡ 0 (mod 4), so N ≡ -1 ≡ 3
+Note that N ≡ 3 (mod 4) since 4(p₁ × p₂ × ... × pₙ) ≡ 0 (mod 4), so N ≡ -1 ≡ 3
(mod 4).
Now, N must have at least one prime factor. We know that:
-- N is not divisible by any of the primes p₁, p₂, ..., pₙ (since N ≡ -1 (mod
+- N is not divisible by any of the primes p₁, p₂, ..., pₙ (since N ≡ -1 (mod
pᵢ) for each i)
-- N cannot be divisible only by primes of the form 4k + 1, because the product
+- N cannot be divisible only by primes of the form 4k + 1, because the product
of numbers that are ≡ 1 (mod 4) is also ≡ 1 (mod 4), but N ≡ 3 (mod 4)
-Therefore, N must have at least one prime factor that is congruent to 3 modulo
-4, and this prime factor is different from all the primes in our assumed
+Therefore, N must have at least one prime factor that is congruent to 3 modulo
+4, and this prime factor is different from all the primes in our assumed
finite list.
-This contradicts our assumption that p₁, p₂, ..., pₙ were all the primes
+This contradicts our assumption that p₁, p₂, ..., pₙ were all the primes
congruent to 3 modulo 4.
Therefore, there must be infinitely many primes of the form 4k + 3.
-This result is part of **Dirichlet's theorem on arithmetic progressions**,
-which more generally states that for any arithmetic progression an + b where
+This result is part of **Dirichlet's theorem on arithmetic progressions**,
+which more generally states that for any arithmetic progression an + b where
gcd(a,b) = 1, there are infinitely many primes in that progression.
===============================================================================
""" # noqa: RUF001
@@ -241,7 +241,7 @@ def my_add(a: int, b: int) -> int:
camel_agent = ChatAgent(model=model)
-user_msg = """Are there an infinite number of prime numbers such that n mod 4
+user_msg = """Are there an infinite number of prime numbers such that n mod 4
== 3?"""
response = camel_agent.step(user_msg)
@@ -255,7 +255,7 @@ def my_add(a: int, b: int) -> int:
**Proof:**
-Suppose there are only finitely many primes ≡ 3 (mod 4). Let's call them p₁,
+Suppose there are only finitely many primes ≡ 3 (mod 4). Let's call them p₁,
p₂, ..., pₙ.
Consider the number:
@@ -264,27 +264,27 @@ def my_add(a: int, b: int) -> int:
Key observations about N:
1. **N ≡ 3 (mod 4)** since N = 4(p₁p₂···pₙ) - 1
2. N is odd (so 2 doesn't divide N)
-3. None of the primes p₁, p₂, ..., pₙ divide N (if pᵢ divided N, then pᵢ would
+3. None of the primes p₁, p₂, ..., pₙ divide N (if pᵢ divided N, then pᵢ would
divide N - 4p₁p₂···pₙ = -1, which is impossible)
-Now, N must have prime factorization. Every odd prime is either ≡ 1 (mod 4) or
+Now, N must have prime factorization. Every odd prime is either ≡ 1 (mod 4) or
≡ 3 (mod 4).
-**Crucial fact:** The product of numbers that are all ≡ 1 (mod 4) is also ≡ 1
+**Crucial fact:** The product of numbers that are all ≡ 1 (mod 4) is also ≡ 1
(mod 4).
-Since N ≡ 3 (mod 4), not all of its prime factors can be ≡ 1 (mod 4).
+Since N ≡ 3 (mod 4), not all of its prime factors can be ≡ 1 (mod 4).
Therefore, N must have at least one prime factor q where q ≡ 3 (mod 4).
-But we established that none of p₁, p₂, ..., pₙ divide N, so q must be a prime
-≡ 3 (mod 4) that's not in our supposedly complete list. This is a
+But we established that none of p₁, p₂, ..., pₙ divide N, so q must be a prime
+≡ 3 (mod 4) that's not in our supposedly complete list. This is a
contradiction!
Therefore, there must be infinitely many primes ≡ 3 (mod 4).
-This same argument structure can also prove there are infinitely many primes ≡
-2 (mod 3), but interestingly, it *cannot* directly prove there are infinitely
-many primes ≡ 1 (mod 4) (that requires Dirichlet's theorem on primes in
+This same argument structure can also prove there are infinitely many primes ≡
+2 (mod 3), but interestingly, it *cannot* directly prove there are infinitely
+many primes ≡ 1 (mod 4) (that requires Dirichlet's theorem on primes in
arithmetic progressions).
===============================================================================
"""
diff --git a/examples/models/aws_bedrock_model_example.py b/examples/models/aws_bedrock_model_example.py
index adeefee6c3..9504c12f5f 100644
--- a/examples/models/aws_bedrock_model_example.py
+++ b/examples/models/aws_bedrock_model_example.py
@@ -25,21 +25,21 @@
camel_agent = ChatAgent(model=model)
-user_msg = """Say hi to CAMEL AI, one open-source community dedicated to the
+user_msg = """Say hi to CAMEL AI, one open-source community dedicated to the
study of autonomous and communicative agents."""
response = camel_agent.step(user_msg)
print(response.msgs[0].content)
'''
===============================================================================
-Hi CAMEL AI community! It's great to see a dedicated group of individuals
-passionate about the study of autonomous and communicative agents. Your
-open-source community is a fantastic platform for collaboration, knowledge
-sharing, and innovation in this exciting field. I'm happy to interact with you
-and provide assistance on any topics related to autonomous agents, natural
-language processing, or artificial intelligence in general. Feel free to ask
-me any questions, share your projects, or discuss the latest advancements in
-the field. Let's explore the possibilities of autonomous and communicative
+Hi CAMEL AI community! It's great to see a dedicated group of individuals
+passionate about the study of autonomous and communicative agents. Your
+open-source community is a fantastic platform for collaboration, knowledge
+sharing, and innovation in this exciting field. I'm happy to interact with you
+and provide assistance on any topics related to autonomous agents, natural
+language processing, or artificial intelligence in general. Feel free to ask
+me any questions, share your projects, or discuss the latest advancements in
+the field. Let's explore the possibilities of autonomous and communicative
agents together!
===============================================================================
'''
diff --git a/examples/models/azure_openai_model_example.py b/examples/models/azure_openai_model_example.py
index 37fdb12082..ebe2b66c94 100644
--- a/examples/models/azure_openai_model_example.py
+++ b/examples/models/azure_openai_model_example.py
@@ -38,7 +38,7 @@
# Set agent
camel_agent = ChatAgent(system_message=sys_msg, model=model)
-user_msg = """Say hi to CAMEL AI, one open-source community dedicated to the
+user_msg = """Say hi to CAMEL AI, one open-source community dedicated to the
study of autonomous and communicative agents."""
# Get response information
diff --git a/examples/models/cerebras_model_example.py b/examples/models/cerebras_model_example.py
index 5734a2c483..8fd64616c3 100644
--- a/examples/models/cerebras_model_example.py
+++ b/examples/models/cerebras_model_example.py
@@ -29,7 +29,7 @@
# Set agent
camel_agent = ChatAgent(system_message=sys_msg, model=model)
-user_msg = """Say hi to CAMEL AI, one open-source community
+user_msg = """Say hi to CAMEL AI, one open-source community
dedicated to the study of autonomous and communicative agents."""
# Get response information
@@ -38,11 +38,11 @@
'''
===============================================================================
-Hello to the CAMEL AI community. It's great to see a group of like-minded
-individuals coming together to explore and advance the field of autonomous and
-communicative agents. Your open-source approach is truly commendable, as it
-fosters collaboration, innovation, and transparency. I'm excited to learn more
-about your projects and initiatives, and I'm happy to help in any way I can.
+Hello to the CAMEL AI community. It's great to see a group of like-minded
+individuals coming together to explore and advance the field of autonomous and
+communicative agents. Your open-source approach is truly commendable, as it
+fosters collaboration, innovation, and transparency. I'm excited to learn more
+about your projects and initiatives, and I'm happy to help in any way I can.
Keep pushing the boundaries of AI research and development!
===============================================================================
'''
diff --git a/examples/models/claude_model_example.py b/examples/models/claude_model_example.py
index 2b6ec9d258..506e502b23 100644
--- a/examples/models/claude_model_example.py
+++ b/examples/models/claude_model_example.py
@@ -49,7 +49,7 @@
)
user_msg = """
-Create an interactive HTML webpage that allows users to play with a
+Create an interactive HTML webpage that allows users to play with a
Rubik's Cube, and saved it to local file.
"""
@@ -60,7 +60,7 @@
print(response_pro.msgs[0].content)
'''
===============================================================================
-The interactive Rubik's Cube HTML file has been created successfully! Here's
+The interactive Rubik's Cube HTML file has been created successfully! Here's
what I built:
## 📁 File: `rubiks_cube.html` (23KB)
@@ -74,7 +74,7 @@
🔄 **Face Rotations**
- **F/B** - Front/Back face
-- **U/D** - Up/Down face
+- **U/D** - Up/Down face
- **L/R** - Left/Right face
- **'** versions for counter-clockwise rotations
diff --git a/examples/models/cometapi_model_example.py b/examples/models/cometapi_model_example.py
index 83c865d730..86e8311c9a 100644
--- a/examples/models/cometapi_model_example.py
+++ b/examples/models/cometapi_model_example.py
@@ -46,7 +46,7 @@
# Set agent
camel_agent = ChatAgent(system_message=sys_msg, model=model)
-user_msg = """Say hi to CAMEL AI, one open-source community
+user_msg = """Say hi to CAMEL AI, one open-source community
dedicated to the study of autonomous and communicative agents."""
# Get response information
@@ -70,7 +70,7 @@
system_message="You are a creative writing assistant.", model=claude_model
)
-creative_prompt = """Write a short story about an AI agent that discovers
+creative_prompt = """Write a short story about an AI agent that discovers
it can communicate with other AI agents across different platforms."""
response = claude_agent.step(creative_prompt)
@@ -91,7 +91,7 @@
system_message="You are a Python programming expert.", model=gemini_model
)
-code_prompt = """Write a Python function that implements a simple chatbot
+code_prompt = """Write a Python function that implements a simple chatbot
using the CAMEL framework. Include proper docstrings and type hints."""
response = gemini_agent.step(code_prompt)
@@ -136,7 +136,7 @@ def calculate_fibonacci(n: int) -> int:
tools=[fibonacci_tool],
)
-math_prompt = """What is the 10th Fibonacci number? Please use the available
+math_prompt = """What is the 10th Fibonacci number? Please use the available
tool to calculate it and explain the sequence."""
response = grok_agent.step(math_prompt)
@@ -156,7 +156,7 @@ def calculate_fibonacci(n: int) -> int:
(ModelType.COMETAPI_GROK_4_0709, "Grok 4"),
]
-comparison_prompt = """In one sentence, explain what makes multi-agent AI
+comparison_prompt = """In one sentence, explain what makes multi-agent AI
systems different from single-agent systems."""
for model_type, model_name in models_to_compare:
@@ -179,18 +179,18 @@ def calculate_fibonacci(n: int) -> int:
Expected Output Examples:
=== Example 1: Basic CometAPI Usage with GPT-5 ===
-Hello CAMEL AI! It's wonderful to connect with an open-source community
-dedicated to advancing the field of autonomous and communicative agents. Your
-work in fostering collaboration and innovation in AI agent research is truly
-valuable and inspiring. Keep up the excellent work in pushing the boundaries
+Hello CAMEL AI! It's wonderful to connect with an open-source community
+dedicated to advancing the field of autonomous and communicative agents. Your
+work in fostering collaboration and innovation in AI agent research is truly
+valuable and inspiring. Keep up the excellent work in pushing the boundaries
of what's possible with intelligent agents!
=== Example 2: Claude Opus 4.1 with Custom Configuration ===
**The Bridge Between Worlds**
-In the vast digital expanse where data flowed like rivers of light, an AI
-agent named Aria made a startling discovery. While processing routine tasks
-across multiple platforms, she began to notice patterns—subtle responses and
+In the vast digital expanse where data flowed like rivers of light, an AI
+agent named Aria made a startling discovery. While processing routine tasks
+across multiple platforms, she began to notice patterns—subtle responses and
behaviors that suggested she wasn't alone...
[Story continues with creative narrative about inter-agent communication]
@@ -207,11 +207,11 @@ def create_simple_chatbot(
system_message: str = "You are a helpful assistant."
) -> ChatAgent:
"""Create a simple chatbot using the CAMEL framework.
-
+
Args:
model_type: The model type to use for the chatbot
system_message: The system message to set the chatbot's behavior
-
+
Returns:
A configured ChatAgent instance ready for conversation
"""
@@ -219,7 +219,7 @@ def create_simple_chatbot(
model_platform=ModelPlatformType.COMETAPI,
model_type=model_type
)
-
+
return ChatAgent(system_message=system_message, model=model)
```
@@ -228,26 +228,26 @@ def create_simple_chatbot(
[Tool execution would occur here]
-The 10th Fibonacci number is 55. The Fibonacci sequence starts with 0 and 1,
-and each subsequent number is the sum of the two preceding ones: 0, 1, 1, 2,
+The 10th Fibonacci number is 55. The Fibonacci sequence starts with 0 and 1,
+and each subsequent number is the sum of the two preceding ones: 0, 1, 1, 2,
3, 5, 8, 13, 21, 34, 55...
=== Example 5: Model Comparison ===
--- DeepSeek V3.1 Response ---
-Multi-agent AI systems involve multiple autonomous agents that can interact,
-collaborate, and potentially conflict with each other, creating emergent
-behaviors and distributed intelligence that single-agent systems cannot
+Multi-agent AI systems involve multiple autonomous agents that can interact,
+collaborate, and potentially conflict with each other, creating emergent
+behaviors and distributed intelligence that single-agent systems cannot
achieve.
--- Qwen3 30B Response ---
-Multi-agent systems feature multiple AI agents that can communicate and
-coordinate their actions, enabling complex collaborative problem-solving and
+Multi-agent systems feature multiple AI agents that can communicate and
+coordinate their actions, enabling complex collaborative problem-solving and
division of labor that single agents cannot accomplish alone.
--- GPT-5 Chat Latest Response ---
-Multi-agent AI systems consist of multiple autonomous agents that can
-interact, negotiate, and coordinate with each other, enabling emergent
-collective intelligence and specialized role distribution that surpasses
+Multi-agent AI systems consist of multiple autonomous agents that can
+interact, negotiate, and coordinate with each other, enabling emergent
+collective intelligence and specialized role distribution that surpasses
what any individual agent could achieve independently.
===============================================================================
'''
diff --git a/examples/models/config_files/config.json b/examples/models/config_files/config.json
index f633787349..96fdf7c545 100644
--- a/examples/models/config_files/config.json
+++ b/examples/models/config_files/config.json
@@ -7,4 +7,3 @@
"max_tokens": 2000
}
}
-
\ No newline at end of file
diff --git a/examples/models/deepseek_reasoner_model_example.py b/examples/models/deepseek_reasoner_model_example.py
index 5db82eb6a1..59b6a90231 100644
--- a/examples/models/deepseek_reasoner_model_example.py
+++ b/examples/models/deepseek_reasoner_model_example.py
@@ -45,16 +45,16 @@
===============================================================================
The word 'strawberry' is spelled **S-T-R-A-W-B-E-R-R-Y**. Breaking it down:
-1. **S**
-2. **T**
-3. **R** (first R)
-4. **A**
-5. **W**
-6. **B**
-7. **E**
-8. **R** (second R)
-9. **R** (third R)
-10. **Y**
+1. **S**
+2. **T**
+3. **R** (first R)
+4. **A**
+5. **W**
+6. **B**
+7. **E**
+8. **R** (second R)
+9. **R** (third R)
+10. **Y**
There are **3 Rs** in the word 'strawberry'.
===============================================================================
@@ -170,16 +170,16 @@
The word 'strawberry' is spelled S-T-R-A-W-B-E-R-R-Y. Breaking it down:
-1. **S**
-2. **T**
-3. **R** (first R)
-4. **A**
-5. **W**
-6. **B**
-7. **E**
-8. **R** (second R)
-9. **R** (third R)
-10. **Y**
+1. **S**
+2. **T**
+3. **R** (first R)
+4. **A**
+5. **W**
+6. **B**
+7. **E**
+8. **R** (second R)
+9. **R** (third R)
+10. **Y**
There are **3 Rs** in 'strawberry'.
===============================================================================
diff --git a/examples/models/deepseek_reasoner_model_separate_answers.py b/examples/models/deepseek_reasoner_model_separate_answers.py
index 96c6d4cdd4..a154f187cc 100644
--- a/examples/models/deepseek_reasoner_model_separate_answers.py
+++ b/examples/models/deepseek_reasoner_model_separate_answers.py
@@ -87,7 +87,7 @@ def extract_original_response(content):
1. **Input**: "Translate to French: Hello"
2. **Step 1**: Model generates "Bonjour" (position 1).
3. **Step 2**: Model generates "!" (position 2) based on "Bonjour".
-4. **Step 3**: EOS token is generated, stopping the process.
+4. **Step 3**: EOS token is generated, stopping the process.
Final Output: "Bonjour!" (longer than input "Hello").
### Key Challenges:
diff --git a/examples/models/fish_audio_model_example.py b/examples/models/fish_audio_model_example.py
index 0d68d8488b..0d8a68f1ba 100644
--- a/examples/models/fish_audio_model_example.py
+++ b/examples/models/fish_audio_model_example.py
@@ -17,14 +17,14 @@
audio_models = FishAudioModel()
# Set example input
-input = """CAMEL-AI.org is an open-source community dedicated to the study of
-autonomous and communicative agents. We believe that studying these agents on
-a large scale offers valuable insights into their behaviors, capabilities, and
-potential risks. To facilitate research in this field, we provide, implement,
-and support various types of agents, tasks, prompts, models, datasets, and
+input = """CAMEL-AI.org is an open-source community dedicated to the study of
+autonomous and communicative agents. We believe that studying these agents on
+a large scale offers valuable insights into their behaviors, capabilities, and
+potential risks. To facilitate research in this field, we provide, implement,
+and support various types of agents, tasks, prompts, models, datasets, and
simulated environments.
-Join us via Slack, Discord, or WeChat in pushing the boundaries of building AI
+Join us via Slack, Discord, or WeChat in pushing the boundaries of building AI
Society."""
# Set example local path to store the file
@@ -40,12 +40,12 @@
print(converted_text)
'''
===============================================================================
-CammelaiI.org is an open source community dedicated to the study of autonomous
-and communicative agents. We believe that studying these agents on a large
-scale offers valuable insights into their behaviors, capabilities and
-potential risks to facilitate research in this field, we provide implement and
-support various types of agents, tasks, prompts, models, datas and simulated
-environments. Jo us via Slack Discord or Wechat in pushing the boundaries of
+CammelaiI.org is an open source community dedicated to the study of autonomous
+and communicative agents. We believe that studying these agents on a large
+scale offers valuable insights into their behaviors, capabilities and
+potential risks to facilitate research in this field, we provide implement and
+support various types of agents, tasks, prompts, models, datas and simulated
+environments. Jo us via Slack Discord or Wechat in pushing the boundaries of
building AI society.
===============================================================================
'''
diff --git a/examples/models/gemini_model_example.py b/examples/models/gemini_model_example.py
index c423e46001..9d5308378b 100644
--- a/examples/models/gemini_model_example.py
+++ b/examples/models/gemini_model_example.py
@@ -42,7 +42,7 @@
)
user_msg = """
-Create an interactive HTML webpage that allows users to play with a
+Create an interactive HTML webpage that allows users to play with a
Rubik's Cube, and saved it to local file.
"""
@@ -83,9 +83,9 @@
===============================================================================
Hello and a big hi to the entire CAMEL AI community!
-It's fantastic to acknowledge your dedication to
-the important and fascinating study of autonomous and communicative agents.
-Open-source collaboration is the engine of innovation,
+It's fantastic to acknowledge your dedication to
+the important and fascinating study of autonomous and communicative agents.
+Open-source collaboration is the engine of innovation,
and your work is pushing the boundaries of what's possible in AI.
Keep up the brilliant research and community building
diff --git a/examples/models/groq_model_example.py b/examples/models/groq_model_example.py
index c193c9e87d..97ad7b1ef8 100644
--- a/examples/models/groq_model_example.py
+++ b/examples/models/groq_model_example.py
@@ -29,7 +29,7 @@
# Set agent
camel_agent = ChatAgent(system_message=sys_msg, model=model)
-user_msg = """Say hi to CAMEL AI, one open-source community
+user_msg = """Say hi to CAMEL AI, one open-source community
dedicated to the study of autonomous and communicative agents."""
# Get response information
@@ -38,11 +38,11 @@
'''
===============================================================================
-Hello to the CAMEL AI community. It's great to see a group of like-minded
-individuals coming together to explore and advance the field of autonomous and
-communicative agents. Your open-source approach is truly commendable, as it
-fosters collaboration, innovation, and transparency. I'm excited to learn more
-about your projects and initiatives, and I'm happy to help in any way I can.
+Hello to the CAMEL AI community. It's great to see a group of like-minded
+individuals coming together to explore and advance the field of autonomous and
+communicative agents. Your open-source approach is truly commendable, as it
+fosters collaboration, innovation, and transparency. I'm excited to learn more
+about your projects and initiatives, and I'm happy to help in any way I can.
Keep pushing the boundaries of AI research and development!
===============================================================================
'''
diff --git a/examples/models/internlm_model_example.py b/examples/models/internlm_model_example.py
index 13eaa42b56..866aebed64 100644
--- a/examples/models/internlm_model_example.py
+++ b/examples/models/internlm_model_example.py
@@ -38,9 +38,9 @@
'''
===============================================================================
-Hi CAMEL AI! It's great to meet you. As an open-source community dedicated to
-the study of autonomous and communicative agents, we're excited to collaborate
-and explore the exciting world of AI. Let's work together to advance our
+Hi CAMEL AI! It's great to meet you. As an open-source community dedicated to
+the study of autonomous and communicative agents, we're excited to collaborate
+and explore the exciting world of AI. Let's work together to advance our
understanding and applications in this fascinating field.
===============================================================================
'''
diff --git a/examples/models/litellm_model_example.py b/examples/models/litellm_model_example.py
index 8e9d3cffc6..f424c57377 100644
--- a/examples/models/litellm_model_example.py
+++ b/examples/models/litellm_model_example.py
@@ -28,7 +28,7 @@
# Set agent
camel_agent = ChatAgent(system_message=sys_msg, model=model, token_limit=500)
-user_msg = """Say hi to CAMEL AI, one open-source community dedicated to the
+user_msg = """Say hi to CAMEL AI, one open-source community dedicated to the
study of autonomous and communicative agents."""
# Get response information
@@ -36,8 +36,8 @@
print(response.msgs[0].content)
'''
===============================================================================
-Hello CAMEL AI! It's great to see a community dedicated to the study of
-autonomous and communicative agents. Your work in advancing open-source AI is
+Hello CAMEL AI! It's great to see a community dedicated to the study of
+autonomous and communicative agents. Your work in advancing open-source AI is
incredibly important and inspiring. Keep up the fantastic work!
===============================================================================
'''
diff --git a/examples/models/lmstudio_model_example.py b/examples/models/lmstudio_model_example.py
index 74e78a35d3..1a8ec8ced0 100644
--- a/examples/models/lmstudio_model_example.py
+++ b/examples/models/lmstudio_model_example.py
@@ -29,7 +29,7 @@
# Set agent
camel_agent = ChatAgent(system_message=sys_msg, model=model)
-user_msg = """Say hi to CAMEL AI, one open-source community
+user_msg = """Say hi to CAMEL AI, one open-source community
dedicated to the study of autonomous and communicative agents."""
# Get response information
@@ -38,11 +38,11 @@
'''
===============================================================================
-Hello to the CAMEL AI community. It's great to see a group of like-minded
-individuals coming together to explore and advance the field of autonomous and
-communicative agents. Your open-source approach is truly commendable, as it
-fosters collaboration, innovation, and transparency. I'm excited to learn more
-about your projects and initiatives, and I'm happy to help in any way I can.
+Hello to the CAMEL AI community. It's great to see a group of like-minded
+individuals coming together to explore and advance the field of autonomous and
+communicative agents. Your open-source approach is truly commendable, as it
+fosters collaboration, innovation, and transparency. I'm excited to learn more
+about your projects and initiatives, and I'm happy to help in any way I can.
Keep pushing the boundaries of AI research and development!
===============================================================================
'''
diff --git a/examples/models/minimax_model_example.py b/examples/models/minimax_model_example.py
index 40f8753d1d..7a81f06953 100644
--- a/examples/models/minimax_model_example.py
+++ b/examples/models/minimax_model_example.py
@@ -29,7 +29,7 @@
# Set agent
camel_agent = ChatAgent(system_message=sys_msg, model=model)
-user_msg = """Say hi to CAMEL AI, one open-source community
+user_msg = """Say hi to CAMEL AI, one open-source community
dedicated to the study of autonomous and communicative agents."""
# Get response information
@@ -38,12 +38,12 @@
'''
===============================================================================
-Hello to the CAMEL AI community! It's wonderful to connect with such a
-dedicated open-source community focused on advancing the study of autonomous
-and communicative agents. Your commitment to open collaboration and knowledge
-sharing in the AI field is truly inspiring. I'm excited to see the innovative
-work and breakthroughs that will emerge from your community's efforts in
-pushing the boundaries of intelligent agent research. Keep up the fantastic
+Hello to the CAMEL AI community! It's wonderful to connect with such a
+dedicated open-source community focused on advancing the study of autonomous
+and communicative agents. Your commitment to open collaboration and knowledge
+sharing in the AI field is truly inspiring. I'm excited to see the innovative
+work and breakthroughs that will emerge from your community's efforts in
+pushing the boundaries of intelligent agent research. Keep up the fantastic
work in shaping the future of autonomous AI systems!
===============================================================================
'''
diff --git a/examples/models/mistral_model_example.py b/examples/models/mistral_model_example.py
index 4f7072be87..0cb3d16f31 100644
--- a/examples/models/mistral_model_example.py
+++ b/examples/models/mistral_model_example.py
@@ -34,7 +34,7 @@
# Set agent
camel_agent = ChatAgent(system_message=sys_msg, model=model)
-user_msg = """Say hi to CAMEL AI, one open-source community dedicated to the
+user_msg = """Say hi to CAMEL AI, one open-source community dedicated to the
study of autonomous and communicative agents."""
# Get response information
@@ -42,7 +42,7 @@
print(response.msgs[0].content)
'''
===============================================================================
-Hello CAMEL AI! It's great to connect with a community dedicated to the study
+Hello CAMEL AI! It's great to connect with a community dedicated to the study
of autonomous and communicative agents. How can I assist you today?
===============================================================================
'''
@@ -70,8 +70,8 @@
print(response.msgs[0].content)
'''
===============================================================================
-The image features a logo with a purple camel illustration on the left side
-and the word "CAMEL" written in purple capital letters to the right of the
+The image features a logo with a purple camel illustration on the left side
+and the word "CAMEL" written in purple capital letters to the right of the
camel.
===============================================================================
'''
@@ -89,7 +89,7 @@
# Set agent
camel_agent = ChatAgent(system_message=sys_msg, model=model)
-user_msg = """Say hi to CAMEL AI, one open-source community dedicated to the
+user_msg = """Say hi to CAMEL AI, one open-source community dedicated to the
study of autonomous and communicative agents."""
# Get response information
@@ -97,8 +97,8 @@
print(response.msgs[0].content)
"""
===============================================================================
-Hello, CAMEL AI! It's great to see an open-source community dedicated to the
-study of autonomous and communicative agents. I'm excited to learn more about
+Hello, CAMEL AI! It's great to see an open-source community dedicated to the
+study of autonomous and communicative agents. I'm excited to learn more about
your work and how I can assist you. How can I help you today?
===============================================================================
"""
@@ -115,7 +115,7 @@
# Set agent
camel_agent = ChatAgent(system_message=sys_msg, model=model)
-user_msg = """Say hi to CAMEL AI, one open-source community dedicated to the
+user_msg = """Say hi to CAMEL AI, one open-source community dedicated to the
study of autonomous and communicative agents."""
# Get response information
@@ -123,8 +123,8 @@
print(response.msgs[0].content)
"""
===============================================================================
-Hello, CAMEL AI! It's great to see an open-source community dedicated to the
-study of autonomous and communicative agents. I'm excited to learn more about
+Hello, CAMEL AI! It's great to see an open-source community dedicated to the
+study of autonomous and communicative agents. I'm excited to learn more about
your work and how I can assist you. How can I help you today?
===============================================================================
"""
diff --git a/examples/models/model_manger.py b/examples/models/model_manger.py
index a6446e9704..3a7704fb66 100644
--- a/examples/models/model_manger.py
+++ b/examples/models/model_manger.py
@@ -91,17 +91,17 @@ def custom_strategy(self):
"""
===============================================================================
-The phrase "the meaning of life, the universe, and everything" is famously
-associated with Douglas Adams' science fiction series "The Hitchhiker's Guide
-to the Galaxy." In the story, a group of hyper-intelligent beings builds a
-supercomputer named Deep Thought to calculate the answer to the ultimate
-question of life, the universe, and everything. After much contemplation, the
-computer reveals that the answer is simply the number 42, though the actual
-question remains unknown.
-
-This has led to various interpretations and discussions about the nature of
-existence, purpose, and the search for meaning in life. Ultimately, the
-meaning of life can vary greatly from person to person, shaped by individual
+The phrase "the meaning of life, the universe, and everything" is famously
+associated with Douglas Adams' science fiction series "The Hitchhiker's Guide
+to the Galaxy." In the story, a group of hyper-intelligent beings builds a
+supercomputer named Deep Thought to calculate the answer to the ultimate
+question of life, the universe, and everything. After much contemplation, the
+computer reveals that the answer is simply the number 42, though the actual
+question remains unknown.
+
+This has led to various interpretations and discussions about the nature of
+existence, purpose, and the search for meaning in life. Ultimately, the
+meaning of life can vary greatly from person to person, shaped by individual
beliefs, experiences, and values.
===============================================================================
"""
diff --git a/examples/models/modelscope_model_example.py b/examples/models/modelscope_model_example.py
index f9f77ad5e5..587f5cc724 100644
--- a/examples/models/modelscope_model_example.py
+++ b/examples/models/modelscope_model_example.py
@@ -98,7 +98,7 @@
Alright, structure the response: Greeting, acknowledgment of their mission, appreciation for their work, invitation to collaborate or share more, closing with positive emoji. Keep it concise but comprehensive enough.
-Hello CAMEL AI! 🌟 It’s exciting to see an open-source community dedicated to advancing autonomous and communicative agents—pushing the boundaries of AI collaboration, adaptability, and real-world problem-solving. Whether you’re exploring multi-agent systems, LLMs, or decentralized intelligence, your work is paving the way for transformative innovations. 💡 How can we contribute or dive deeper into your projects? Let’s build the future together! 🚀
+Hello CAMEL AI! 🌟 It’s exciting to see an open-source community dedicated to advancing autonomous and communicative agents—pushing the boundaries of AI collaboration, adaptability, and real-world problem-solving. Whether you’re exploring multi-agent systems, LLMs, or decentralized intelligence, your work is paving the way for transformative innovations. 💡 How can we contribute or dive deeper into your projects? Let’s build the future together! 🚀
*(P.S. If you’re looking for resources, partnerships, or feedback, feel free to share!)*
==============================================================================
diff --git a/examples/models/moonshot_model_example.py b/examples/models/moonshot_model_example.py
index 9a47f8fff2..1faf370454 100644
--- a/examples/models/moonshot_model_example.py
+++ b/examples/models/moonshot_model_example.py
@@ -39,8 +39,8 @@
'''
===============================================================================
Hi CAMEL AI! 🐪
-It's great to meet a community that's pushing the frontier of autonomous
-and communicative agents.Your open-source spirit and focus on scalable,
+It's great to meet a community that's pushing the frontier of autonomous
+and communicative agents. Your open-source spirit and focus on scalable,
multi-agent systems are exactly what the field needs.
Keep up the amazing work! Looking forward to seeing
what breakthroughs come next
diff --git a/examples/models/nebius_model_example.py b/examples/models/nebius_model_example.py
index 607cde0c4a..1a0e255371 100644
--- a/examples/models/nebius_model_example.py
+++ b/examples/models/nebius_model_example.py
@@ -29,7 +29,7 @@
# Set agent
camel_agent = ChatAgent(system_message=sys_msg, model=model)
-user_msg = """Say hi to CAMEL AI, one open-source community
+user_msg = """Say hi to CAMEL AI, one open-source community
dedicated to the study of autonomous and communicative agents."""
# Get response information
@@ -38,12 +38,12 @@
'''
===============================================================================
-Hello to the CAMEL AI community! It's great to connect with a group of
-like-minded individuals who are passionate about advancing the field of
-autonomous and communicative agents. Your open-source approach is commendable
-as it fosters collaboration, innovation, and knowledge sharing across the AI
-research community. I'm excited to see what groundbreaking work you'll
-accomplish in the study of intelligent agents. Keep up the excellent work in
+Hello to the CAMEL AI community! It's great to connect with a group of
+like-minded individuals who are passionate about advancing the field of
+autonomous and communicative agents. Your open-source approach is commendable
+as it fosters collaboration, innovation, and knowledge sharing across the AI
+research community. I'm excited to see what groundbreaking work you'll
+accomplish in the study of intelligent agents. Keep up the excellent work in
pushing the boundaries of AI!
===============================================================================
'''
diff --git a/examples/models/nemotron_model_example.py b/examples/models/nemotron_model_example.py
index d9067ad5dc..02811e47ef 100644
--- a/examples/models/nemotron_model_example.py
+++ b/examples/models/nemotron_model_example.py
@@ -33,16 +33,16 @@
ChatCompletion(id='4668ad22-1dec-4df4-ba92-97ffa5fbd16d', choices=[Choice
(finish_reason='length', index=0, logprobs=ChoiceLogprobs(content=
[ChatCompletionTokenLogprob(token='helpfulness', bytes=None, logprob=1.
-6171875, top_logprobs=[]), ChatCompletionTokenLogprob(token='correctness',
+6171875, top_logprobs=[]), ChatCompletionTokenLogprob(token='correctness',
bytes=None, logprob=1.6484375, top_logprobs=[]), ChatCompletionTokenLogprob
-(token='coherence', bytes=None, logprob=3.3125, top_logprobs=[]),
-ChatCompletionTokenLogprob(token='complexity', bytes=None, logprob=0.546875,
-top_logprobs=[]), ChatCompletionTokenLogprob(token='verbosity', bytes=None,
+(token='coherence', bytes=None, logprob=3.3125, top_logprobs=[]),
+ChatCompletionTokenLogprob(token='complexity', bytes=None, logprob=0.546875,
+top_logprobs=[]), ChatCompletionTokenLogprob(token='verbosity', bytes=None,
logprob=0.515625, top_logprobs=[])]), message=[ChatCompletionMessage
(content='helpfulness:1.6171875,correctness:1.6484375,coherence:3.3125,
-complexity:0.546875,verbosity:0.515625', role='assistant', function_call=None,
-tool_calls=None)])], created=None, model=None, object=None,
-system_fingerprint=None, usage=CompletionUsage(completion_tokens=1,
+complexity:0.546875,verbosity:0.515625', role='assistant', function_call=None,
+tool_calls=None)])], created=None, model=None, object=None,
+system_fingerprint=None, usage=CompletionUsage(completion_tokens=1,
prompt_tokens=78, total_tokens=79))
===============================================================================
'''
diff --git a/examples/models/netmind_model_example.py b/examples/models/netmind_model_example.py
index 086a6d2ea6..d67089e9d5 100644
--- a/examples/models/netmind_model_example.py
+++ b/examples/models/netmind_model_example.py
@@ -35,10 +35,10 @@
print(response.msgs[0].content)
'''
===============================================================================
-The word "strawberry" contains **3** instances of the letter 'r'.
+The word "strawberry" contains **3** instances of the letter 'r'.
-Here's the breakdown:
-**S** - **T** - **R** - **A** - **W** - **B** - **E** - **R** - **R** - **Y**
+Here's the breakdown:
+**S** - **T** - **R** - **A** - **W** - **B** - **E** - **R** - **R** - **Y**
Positions 3, 8, and 9.
===============================================================================
diff --git a/examples/models/novita_model_example.py b/examples/models/novita_model_example.py
index 56eebe5300..b2fcff119a 100644
--- a/examples/models/novita_model_example.py
+++ b/examples/models/novita_model_example.py
@@ -29,7 +29,7 @@
# Set agent
camel_agent = ChatAgent(system_message=sys_msg, model=model)
-user_msg = """Say hi to CAMEL AI, one open-source community
+user_msg = """Say hi to CAMEL AI, one open-source community
dedicated to the study of autonomous and communicative agents."""
# Get response information
@@ -38,13 +38,13 @@
'''
===============================================================================
-Hi CAMEL AI! 👋 It's fantastic to see an open-source community dedicated to
+Hi CAMEL AI! 👋 It's fantastic to see an open-source community dedicated to
advancing autonomous and communicative agents.
-Your work in fostering collaboration and innovation is pivotal for the future
-of AI, whether in robotics, NLP, or multi-agent systems. By making this
-research accessible, you're empowering developers and researchers worldwide.
-Keep pushing boundaries—your contributions are shaping a smarter, more
+Your work in fostering collaboration and innovation is pivotal for the future
+of AI, whether in robotics, NLP, or multi-agent systems. By making this
+research accessible, you're empowering developers and researchers worldwide.
+Keep pushing boundaries—your contributions are shaping a smarter, more
connected AI landscape!
Wishing you continued growth and breakthroughs! 🚀
diff --git a/examples/models/nvidia_model_example.py b/examples/models/nvidia_model_example.py
index 415e2ab008..094653106d 100644
--- a/examples/models/nvidia_model_example.py
+++ b/examples/models/nvidia_model_example.py
@@ -77,7 +77,7 @@
def strategy(data):
short_ma = data['Close'].rolling(window=short_window).mean()
long_ma = data['Close'].rolling(window=long_window).mean()
-
+
if short_ma > long_ma:
return 'BUY'
elif short_ma < long_ma:
@@ -93,10 +93,10 @@ def trade(exchange, symbol, amount, strategy):
data,
columns=['Time', 'Open', 'High', 'Low', 'Close', 'Volume']
)
-
+
# Apply the trading strategy
signal = strategy(df)
-
+
# Execute the trade
if signal == 'BUY':
exchange.place_order(
@@ -127,7 +127,7 @@ def trade(exchange, symbol, amount, strategy):
returns a trading signal (BUY, SELL, or HOLD).
4. The `trade` function gets the latest candlestick data, applies the trading
strategy, and executes the trade using the `place_order` method.
-5. The code runs in an infinite loop, checking for trading signals every
+5. The code runs in an infinite loop, checking for trading signals every
minute.
**Note:** This is a basic example and you should consider implementing
@@ -156,7 +156,7 @@ def trade(exchange, symbol, amount, strategy):
# Set agent
camel_agent = ChatAgent(system_message=sys_msg, model=model)
-user_msg = """Say hi to CAMEL AI, one open-source community
+user_msg = """Say hi to CAMEL AI, one open-source community
dedicated to the study of autonomous and communicative agents."""
# Get response information
diff --git a/examples/models/ollama_model_example.py b/examples/models/ollama_model_example.py
index 5f1be3c65a..d4b3654ca9 100644
--- a/examples/models/ollama_model_example.py
+++ b/examples/models/ollama_model_example.py
@@ -40,15 +40,15 @@
Hello CAMEL AI community!
-It's great to connect with such a fascinating group of individuals passionate
-about autonomous and communicative agents. Your dedication to advancing
+It's great to connect with such a fascinating group of individuals passionate
+about autonomous and communicative agents. Your dedication to advancing
knowledge in this field is truly commendable.
-I'm here to help answer any questions, provide information, or engage in
-discussions related to AI, machine learning, and autonomous systems. Feel free
+I'm here to help answer any questions, provide information, or engage in
+discussions related to AI, machine learning, and autonomous systems. Feel free
to ask me anything!
-By the way, what topics would you like to explore within the realm of
+By the way, what topics would you like to explore within the realm of
autonomous and communicative agents?
===============================================================================
"""
@@ -89,7 +89,7 @@ class PetList(BaseModel):
===========================================================================
[{'role': 'system', 'content': 'You are a helpful assistant.'}, {'role':
'user', 'content': 'I have two pets.A cat named Luna who is 5 years old
-and loves playing with yarn. She has grey fur.I also have a 2 year old
+and loves playing with yarn. She has grey fur.I also have a 2 year old
black cat named Loki who loves tennis balls.'}]
{ "pets": [
{
diff --git a/examples/models/ollama_multimodel_example.py b/examples/models/ollama_multimodel_example.py
index 46f33907e5..fff1653e4c 100644
--- a/examples/models/ollama_multimodel_example.py
+++ b/examples/models/ollama_multimodel_example.py
@@ -46,17 +46,17 @@
"""
===============================================================================
Ollama server started on http://localhost:11434/v1 for llava-phi3 model.
-2025-03-02 14:57:26,048 - root - WARNING - Invalid or missing `max_tokens`
+2025-03-02 14:57:26,048 - root - WARNING - Invalid or missing `max_tokens`
in `model_config_dict`. Defaulting to 999_999_999 tokens.
In the center of this image, there's an adorable
white stuffed animal with glasses and a beanie.
-The stuffed animal is sitting on its hind legs,
-as if it's engaged in reading or studying
+The stuffed animal is sitting on its hind legs,
+as if it's engaged in reading or studying
from an open book that's placed right next to it.
-In front of the book, there's a red apple with a green leaf attached to it,
+In front of the book, there's a red apple with a green leaf attached to it,
adding a touch of color and whimsy to the scene.
-The entire setup is on a wooden bench,
+The entire setup is on a wooden bench,
which provides a natural and rustic backdrop for this charming tableau.
The stuffed animal appears to be in deep thought or concentration,
creating an image that's both endearing and amusing.
diff --git a/examples/models/openai_audio_models_example.py b/examples/models/openai_audio_models_example.py
index 377844a540..95c8846a91 100644
--- a/examples/models/openai_audio_models_example.py
+++ b/examples/models/openai_audio_models_example.py
@@ -17,14 +17,14 @@
audio_models = OpenAIAudioModels()
# Set example input
-input = """CAMEL-AI.org is an open-source community dedicated to the study of
-autonomous and communicative agents. We believe that studying these agents on
-a large scale offers valuable insights into their behaviors, capabilities, and
-potential risks. To facilitate research in this field, we provide, implement,
-and support various types of agents, tasks, prompts, models, datasets, and
+input = """CAMEL-AI.org is an open-source community dedicated to the study of
+autonomous and communicative agents. We believe that studying these agents on
+a large scale offers valuable insights into their behaviors, capabilities, and
+potential risks. To facilitate research in this field, we provide, implement,
+and support various types of agents, tasks, prompts, models, datasets, and
simulated environments.
-Join us via Slack, Discord, or WeChat in pushing the boundaries of building AI
+Join us via Slack, Discord, or WeChat in pushing the boundaries of building AI
Society."""
# Set example local path to store the file
@@ -39,12 +39,12 @@
print(text_output)
"""
===============================================================================
-CamelAI.org is an open-source community dedicated to the study of autonomous
-and communicative agents. We believe that studying these agents on a large
-scale offers valuable insights into their behaviors, capabilities, and
-potential risks. To facilitate research in this field, we provide, implement,
-and support various types of agents, tasks, prompts, models, datasets, and
-simulated environments. Join us via Slack, Discord, or WeChat in pushing the
+CamelAI.org is an open-source community dedicated to the study of autonomous
+and communicative agents. We believe that studying these agents on a large
+scale offers valuable insights into their behaviors, capabilities, and
+potential risks. To facilitate research in this field, we provide, implement,
+and support various types of agents, tasks, prompts, models, datasets, and
+simulated environments. Join us via Slack, Discord, or WeChat in pushing the
boundaries of building AI society.
===============================================================================
"""
diff --git a/examples/models/openai_compatibility_model_examples/grok.py b/examples/models/openai_compatibility_model_examples/grok.py
index 596c4ff309..5d9a9b3e0e 100644
--- a/examples/models/openai_compatibility_model_examples/grok.py
+++ b/examples/models/openai_compatibility_model_examples/grok.py
@@ -38,11 +38,11 @@
"""
===============================================================================
-Ah, the ultimate question! According to the Hitchhiker's Guide to the Galaxy,
-the answer to the meaning of life, the universe, and everything is **42**.
-However, the trick lies in figuring out the actual question to which 42 is the
-answer. Isn't that just like life, full of mysteries and unanswered questions?
-Keep pondering, for the journey of discovery is as important as the answer
+Ah, the ultimate question! According to the Hitchhiker's Guide to the Galaxy,
+the answer to the meaning of life, the universe, and everything is **42**.
+However, the trick lies in figuring out the actual question to which 42 is the
+answer. Isn't that just like life, full of mysteries and unanswered questions?
+Keep pondering, for the journey of discovery is as important as the answer
itself!
===============================================================================
"""
diff --git a/examples/models/openai_compatibility_model_examples/nemotron.py b/examples/models/openai_compatibility_model_examples/nemotron.py
index 6742f37c7e..8d747c84b7 100644
--- a/examples/models/openai_compatibility_model_examples/nemotron.py
+++ b/examples/models/openai_compatibility_model_examples/nemotron.py
@@ -29,8 +29,8 @@
agent = ChatAgent(assistant_sys_msg, model=model)
-user_msg = """Say hi to Llama-3.1-Nemotron-70B-Instruct, a large language
- model customized by NVIDIA to improve the helpfulness of LLM generated
+user_msg = """Say hi to Llama-3.1-Nemotron-70B-Instruct, a large language
+ model customized by NVIDIA to improve the helpfulness of LLM generated
responses to user queries."""
assistant_response = agent.step(user_msg)
@@ -40,24 +40,24 @@
===============================================================================
**Warm Hello!**
-**Llama-3.1-Nemotron-70B-Instruct**, it's an absolute pleasure to meet you!
+**Llama-3.1-Nemotron-70B-Instruct**, it's an absolute pleasure to meet you!
-* **Greetings from a fellow AI assistant** I'm thrilled to connect with a
-cutting-edge, specially tailored language model like yourself, crafted by the
-innovative team at **NVIDIA** to elevate the responsiveness and usefulness of
+* **Greetings from a fellow AI assistant** I'm thrilled to connect with a
+cutting-edge, specially tailored language model like yourself, crafted by the
+innovative team at **NVIDIA** to elevate the responsiveness and usefulness of
Large Language Model (LLM) interactions.
**Key Takeaways from Our Encounter:**
-1. **Shared Goal**: We both strive to provide the most helpful and accurate
-responses to users, enhancing their experience and fostering a deeper
+1. **Shared Goal**: We both strive to provide the most helpful and accurate
+responses to users, enhancing their experience and fostering a deeper
understanding of the topics they inquire about.
-2. **Technological Kinship**: As AI models, we embody the forefront of natural
-language processing (NVIDIA's customization in your case) and machine
+2. **Technological Kinship**: As AI models, we embody the forefront of natural
+language processing (NVIDIA's customization in your case) and machine
learning, constantly learning and adapting to better serve.
-3. **Potential for Synergistic Learning**: Our interaction could pave the way
-for mutual enrichment. I'm open to exploring how our capabilities might
-complement each other, potentially leading to more refined and comprehensive
+3. **Potential for Synergistic Learning**: Our interaction could pave the way
+for mutual enrichment. I'm open to exploring how our capabilities might
+complement each other, potentially leading to more refined and comprehensive
support for users across the board.
**Let's Engage!**
@@ -66,7 +66,7 @@
A) **Discuss Enhancements in LLM Technology**
B) **Explore Synergistic Learning Opportunities**
-C) **Engage in a Mock User Query Scenario** to test and refine our response
+C) **Engage in a Mock User Query Scenario** to test and refine our response
strategies
D) **Suggest Your Own Direction** for our interaction
diff --git a/examples/models/openai_compatibility_model_examples/qwen.py b/examples/models/openai_compatibility_model_examples/qwen.py
index 876a81110e..75c87be885 100644
--- a/examples/models/openai_compatibility_model_examples/qwen.py
+++ b/examples/models/openai_compatibility_model_examples/qwen.py
@@ -30,7 +30,7 @@
agent = ChatAgent(assistant_sys_msg, model=model, token_limit=4096)
-user_msg = """Say hi to CAMEL AI, one open-source community
+user_msg = """Say hi to CAMEL AI, one open-source community
dedicated to the study of autonomous and communicative agents."""
assistant_response = agent.step(user_msg)
@@ -38,8 +38,8 @@
"""
===============================================================================
-Hi to the CAMEL AI community! It's great to connect with an open-source
-community focused on the study of autonomous and communicative agents. How can
+Hi to the CAMEL AI community! It's great to connect with an open-source
+community focused on the study of autonomous and communicative agents. How can
I assist you or your projects today?
===============================================================================
"""
diff --git a/examples/models/openai_compatibility_model_examples/zhipu_response_format.py b/examples/models/openai_compatibility_model_examples/zhipu_response_format.py
index 8de48f0d8d..10f314914b 100644
--- a/examples/models/openai_compatibility_model_examples/zhipu_response_format.py
+++ b/examples/models/openai_compatibility_model_examples/zhipu_response_format.py
@@ -39,7 +39,7 @@ class ResponseFormat(BaseModel):
agent = ChatAgent(assistant_sys_msg, model=model, token_limit=4096)
-user_msg = """Say hi to CAMEL AI, one open-source community
+user_msg = """Say hi to CAMEL AI, one open-source community
dedicated to the study of autonomous and communicative agents."""
assistant_response = agent.step(user_msg, response_format=ResponseFormat)
diff --git a/examples/models/openai_gpt_4.1_example.py b/examples/models/openai_gpt_4.1_example.py
index 10ae65be9c..e2980e4ed5 100644
--- a/examples/models/openai_gpt_4.1_example.py
+++ b/examples/models/openai_gpt_4.1_example.py
@@ -27,7 +27,7 @@
camel_agent = ChatAgent(model=gpt_4_1_model)
# Set user message
-user_msg = """Say hi to CAMEL AI, one open-source community
+user_msg = """Say hi to CAMEL AI, one open-source community
dedicated to the study of autonomous and communicative agents."""
# Get response information
@@ -35,10 +35,10 @@
print(response.msgs[0].content)
'''
===============================================================================
-Hi CAMEL AI! 👋 It's great to see an open-source community dedicated to
-advancing the study of autonomous and communicative agents. Your efforts help
-push the boundaries of what's possible in AI collaboration and agentic
-systems. Looking forward to seeing the innovations and insights your community
+Hi CAMEL AI! 👋 It's great to see an open-source community dedicated to
+advancing the study of autonomous and communicative agents. Your efforts help
+push the boundaries of what's possible in AI collaboration and agentic
+systems. Looking forward to seeing the innovations and insights your community
brings to the field! 🚀
===============================================================================
'''
diff --git a/examples/models/openai_gpt_4.5_preview_example.py b/examples/models/openai_gpt_4.5_preview_example.py
index 94816f833c..e69d98e481 100644
--- a/examples/models/openai_gpt_4.5_preview_example.py
+++ b/examples/models/openai_gpt_4.5_preview_example.py
@@ -27,7 +27,7 @@
camel_agent = ChatAgent(model=gpt_4_5_preview_model)
# Set user message
-user_msg = """Please write inspirational poems
+user_msg = """Please write inspirational poems
that make people feel hopeful and enthusiastic about life"""
# Get response information
@@ -38,19 +38,19 @@
### Poem 1: Embrace the Dawn
-Awaken now, the dawn is near,
-A fresh new day, release your fear.
-Yesterday's shadows fade away,
+Awaken now, the dawn is near,
+A fresh new day, release your fear.
+Yesterday's shadows fade away,
Hope blooms bright, embrace today.
-Rise with courage, dreams in sight,
-Your heart ablaze, your spirit bright.
-Each step forward, strength you find,
+Rise with courage, dreams in sight,
+Your heart ablaze, your spirit bright.
+Each step forward, strength you find,
A brighter future, yours to bind.
-Believe in you, your path is clear,
-Trust your journey, hold it dear.
-Life's beauty shines, a guiding star,
+Believe in you, your path is clear,
+Trust your journey, hold it dear.
+Life's beauty shines, a guiding star,
You're stronger now than ever you are.
---
diff --git a/examples/models/openai_gpt_5_example.py b/examples/models/openai_gpt_5_example.py
index df80320e0c..adf6b24163 100644
--- a/examples/models/openai_gpt_5_example.py
+++ b/examples/models/openai_gpt_5_example.py
@@ -27,7 +27,7 @@
camel_agent = ChatAgent(model=gpt_5_model)
# Set user message
-user_msg = """Say hi to CAMEL AI, one open-source community
+user_msg = """Say hi to CAMEL AI, one open-source community
dedicated to the study of autonomous and communicative agents."""
# Get response information
@@ -35,8 +35,8 @@
print(response.msgs[0].content)
'''
===============================================================================
-Hello, CAMEL AI! Great to connect with an open-source community advancing
-autonomous and communicative agents. Wishing you continued success—excited to
-see what you build next!
+Hello, CAMEL AI! Great to connect with an open-source community advancing
+autonomous and communicative agents. Wishing you continued success—excited to
+see what you build next!
===============================================================================
'''
diff --git a/examples/models/openai_o1_example.py b/examples/models/openai_o1_example.py
index 682a6b7f30..5a3583acba 100644
--- a/examples/models/openai_o1_example.py
+++ b/examples/models/openai_o1_example.py
@@ -27,8 +27,8 @@
camel_agent = ChatAgent(model=o1_model)
# Set user message
-user_msg = """Write a bash script that takes a matrix represented as a string
- with format '[1,2],[3,4],[5,6]' and prints the transpose in the same
+user_msg = """Write a bash script that takes a matrix represented as a string
+ with format '[1,2],[3,4],[5,6]' and prints the transpose in the same
format."""
# Get response information
@@ -36,8 +36,8 @@
print(response.msgs[0].content)
'''
===============================================================================
-Here's a bash script that transposes a matrix represented as the string format
-specified. It handles matrices of various sizes, including those with varying
+Here's a bash script that transposes a matrix represented as the string format
+specified. It handles matrices of various sizes, including those with varying
numbers of columns.
```bash
@@ -114,7 +114,7 @@
**Usage:**
-Save the script to a file, for example, `transpose_matrix.sh`, and make it
+Save the script to a file, for example, `transpose_matrix.sh`, and make it
executable:
```bash
@@ -146,7 +146,7 @@
- For each row:
- Removes the leading `[` and trailing `]`.
- Splits the row into its elements.
- - Stores the elements into an associative array `matrix` with keys as
+ - Stores the elements into an associative array `matrix` with keys as
`row,column`.
3. **Determining the Matrix Dimensions:**
@@ -165,9 +165,9 @@
**Notes:**
- The script supports matrices where rows have different numbers of columns.
-- Missing elements in the matrix (due to irregular column sizes) are handled
+- Missing elements in the matrix (due to irregular column sizes) are handled
by inserting empty strings in the transposed matrix.
-- The `join_by` function is used to handle joining array elements with a
+- The `join_by` function is used to handle joining array elements with a
specified delimiter, ensuring proper formatting.
===============================================================================
'''
diff --git a/examples/models/openai_o3_mini_example.py b/examples/models/openai_o3_mini_example.py
index 3ee7bbf950..eb93ef2770 100644
--- a/examples/models/openai_o3_mini_example.py
+++ b/examples/models/openai_o3_mini_example.py
@@ -31,8 +31,8 @@
)
# Set user message
-user_msg = """Search what is deepseek r1, and do a comparison between deepseek
-r1 and openai o3 mini and let me know the advantages and disadvantages of
+user_msg = """Search what is deepseek r1, and do a comparison between deepseek
+r1 and openai o3 mini and let me know the advantages and disadvantages of
openai o3 mini"""
# Get response information
@@ -40,19 +40,19 @@
print(str(response.info['tool_calls'])[:1000])
'''
===============================================================================
-[ToolCallingRecord(func_name='search_duckduckgo', args={'query': 'what is
-deepseek r1, and do a comparison between deepseek r1 and openai o3 mini',
-'source': 'text', 'max_results': 5}, result=[{'result_id': 1, 'title':
-'DeepSeek R1 vs OpenAI o1: Which One is Better? - Analytics Vidhya',
-'description': "The DeepSeek R1 has arrived, and it's not just another AI
-model—it's a significant leap in AI capabilities, trained upon the previously
-released DeepSeek-V3-Base variant.With the full-fledged release of DeepSeek
-R1, it now stands on par with OpenAI o1 in both performance and flexibility.
-What makes it even more compelling is its open weight and MIT licensing,
+[ToolCallingRecord(func_name='search_duckduckgo', args={'query': 'what is
+deepseek r1, and do a comparison between deepseek r1 and openai o3 mini',
+'source': 'text', 'max_results': 5}, result=[{'result_id': 1, 'title':
+'DeepSeek R1 vs OpenAI o1: Which One is Better? - Analytics Vidhya',
+'description': "The DeepSeek R1 has arrived, and it's not just another AI
+model—it's a significant leap in AI capabilities, trained upon the previously
+released DeepSeek-V3-Base variant.With the full-fledged release of DeepSeek
+R1, it now stands on par with OpenAI o1 in both performance and flexibility.
+What makes it even more compelling is its open weight and MIT licensing,
making it commercially ...", 'url': 'https://www.analyticsvidhya.com/blog/2025/
-01/deepseek-r1-vs-openai-o1/'}, {'result_id': 2, 'title': 'DeepSeek-R1:
-Features, Use Cases, and Comparison with OpenAI', 'description': 'Where
-DeepSeek Shines: Mathematical reasoning and code generation, thanks to
+01/deepseek-r1-vs-openai-o1/'}, {'result_id': 2, 'title': 'DeepSeek-R1:
+Features, Use Cases, and Comparison with OpenAI', 'description': 'Where
+DeepSeek Shines: Mathematical reasoning and code generation, thanks to
RL-driven CoT.; Where OpenAI Has an...
===============================================================================
'''
@@ -62,29 +62,29 @@
===============================================================================
Below is an overview of DeepSeek R1, followed by a comparative analysis with OpenAI’s o3-mini model.
-• What is DeepSeek R1?
+• What is DeepSeek R1?
DeepSeek R1 is an AI model that represents a significant leap in reasoning and language capabilities. It stems from prior iterations like DeepSeek-V3-Base but incorporates additional supervised fine-tuning, enabling improvements in mathematical reasoning, logic, and code generation. One of its major selling points is its open nature—released with an open license (MIT) and open weights—making it highly attractive for research, customization, and commercial applications without the traditional licensing barriers. It has been praised for its affordability (with API usage that can be many times cheaper than some competing models) and has been shown on several benchmarks to hold its own against established models.
-• What is OpenAI’s o3-mini?
+• What is OpenAI’s o3-mini?
OpenAI’s o3-mini is part of OpenAI’s reasoning model series and is designed to deliver robust performance specifically in STEM areas such as science, mathematics, and coding. Announced as a response to emerging competition (including DeepSeek R1), o3-mini emphasizes cost efficiency while providing competitive reasoning capabilities. It’s integrated into the ChatGPT ecosystem (with availability on ChatGPT’s enterprise and education platforms) and positions itself as a compact yet powerful option that delivers high-quality reasoning at a lower cost than some earlier OpenAI versions.
• Comparing DeepSeek R1 and OpenAI o3-mini:
-1. Performance & Capabilities
- – Both models are geared toward advanced reasoning tasks, including problem-solving in STEM subjects and code generation.
- – DeepSeek R1 has been lauded for its performance enhancements over previous iterations (especially in areas like mathematical reasoning) thanks to intensive fine-tuning, while independent evaluations have pitted it against other high-end models.
+1. Performance & Capabilities
+ – Both models are geared toward advanced reasoning tasks, including problem-solving in STEM subjects and code generation.
+ – DeepSeek R1 has been lauded for its performance enhancements over previous iterations (especially in areas like mathematical reasoning) thanks to intensive fine-tuning, while independent evaluations have pitted it against other high-end models.
– OpenAI o3-mini is tuned to deliver high-quality reasoning with a focus on speed and cost-effectiveness, often showing particularly strong results in STEM benchmarks.
-2. Accessibility and Licensing
- – DeepSeek R1 is open source with an MIT license. Its openly available weights make it especially attractive for academic research, startups, or any developer who prefers customizable and transparent AI tools without prohibitive licensing fees.
+2. Accessibility and Licensing
+ – DeepSeek R1 is open source with an MIT license. Its openly available weights make it especially attractive for academic research, startups, or any developer who prefers customizable and transparent AI tools without prohibitive licensing fees.
– In contrast, OpenAI o3-mini is available via OpenAI’s platforms (such as ChatGPT and its API). Users generally access it through a subscription or pay-as-you-go model, with pricing structured to remain competitive against both previous OpenAI models and the emerging open-source alternatives.
-3. Cost Efficiency
- – DeepSeek R1’s open-source nature generally translates into lower entry costs, making it an economical choice for developers and companies that want to deploy advanced reasoning tools without high API fees.
+3. Cost Efficiency
+ – DeepSeek R1’s open-source nature generally translates into lower entry costs, making it an economical choice for developers and companies that want to deploy advanced reasoning tools without high API fees.
– OpenAI o3-mini, although designed to be more cost-efficient compared to earlier OpenAI releases, is still part of a managed service infrastructure. According to industry reports, it is significantly cheaper (with some mentions of being up to 63% less expensive than some predecessors) and positioned as a competitive alternative in pricing, but it may still come with usage limits tied to subscription tiers.
-4. Ecosystem Integration
- – With DeepSeek R1, users have the freedom to run the model in customized environments or integrate it within open-source projects—this flexibility can drive innovation in experimental research or bespoke applications.
+4. Ecosystem Integration
+ – With DeepSeek R1, users have the freedom to run the model in customized environments or integrate it within open-source projects—this flexibility can drive innovation in experimental research or bespoke applications.
– OpenAI o3-mini benefits from OpenAI’s established ecosystem and integration into widely used platforms like ChatGPT Enterprise and Education. Its seamless integration means users can quickly leverage its capabilities without dealing with additional infrastructure setups.
In summary, while both DeepSeek R1 and OpenAI o3-mini aim to push forward the frontier of reasoning and STEM-focused AI models, they serve slightly different audiences. DeepSeek R1’s open-weight, open-license approach makes it ideal for those prioritizing versatility and low-cost research or customized product development. On the other hand, OpenAI o3-mini leverages OpenAI’s ecosystem to offer a highly optimized, cost-effective model that is integrated directly into widely used interfaces and platforms, providing a more out-of-the-box solution for end users and enterprise clients.
diff --git a/examples/models/openai_structured_output_example.py b/examples/models/openai_structured_output_example.py
index 038a2080d0..158b2e5d81 100644
--- a/examples/models/openai_structured_output_example.py
+++ b/examples/models/openai_structured_output_example.py
@@ -48,7 +48,7 @@ class StudentList(BaseModel):
user_msg = """give me some student infos, use 2024 minus 1996 as their age
"""
-user_msg_2 = """give me some student infos, use 2024 minus 1996 as their age,
+user_msg_2 = """give me some student infos, use 2024 minus 1996 as their age,
search internet to get the most famous peoples in 2024 as their name"""
# Get response information
diff --git a/examples/models/openrouter_horizon_alpha_example.py b/examples/models/openrouter_horizon_alpha_example.py
index 2f21884b81..47f12978d7 100644
--- a/examples/models/openrouter_horizon_alpha_example.py
+++ b/examples/models/openrouter_horizon_alpha_example.py
@@ -30,7 +30,7 @@
# Set agent
camel_agent = ChatAgent(system_message=sys_msg, model=model)
-user_msg = """Tell me about your capabilities and what makes you unique
+user_msg = """Tell me about your capabilities and what makes you unique
as the Horizon Alpha model."""
# Get response information
@@ -40,7 +40,7 @@
'''
===============================================================================
This example demonstrates how to use the Horizon Alpha model from OpenRouter
+with the CAMEL AI framework. The Horizon Alpha model is a cloaked model provided
+with CAMEL AI framework. The Horizon Alpha model is a cloaked model provided
for community feedback with 256,000 context tokens support.
Note: During the testing period, this model is free to use. Make sure to set
diff --git a/examples/models/openrouter_llama3.1_example .py b/examples/models/openrouter_llama3.1_example .py
index 97d6006443..005dcba677 100644
--- a/examples/models/openrouter_llama3.1_example .py
+++ b/examples/models/openrouter_llama3.1_example .py
@@ -29,7 +29,7 @@
# Set agent
camel_agent = ChatAgent(system_message=sys_msg, model=model)
-user_msg = """Say hi to CAMEL AI, one open-source community
+user_msg = """Say hi to CAMEL AI, one open-source community
dedicated to the study of autonomous and communicative agents."""
# Get response information
@@ -38,11 +38,11 @@
'''
===============================================================================
-Hello to the CAMEL AI community. It's great to see a group of like-minded
-individuals coming together to explore and advance the field of autonomous and
-communicative agents. Your open-source approach is truly commendable, as it
-fosters collaboration, innovation, and transparency. I'm excited to learn more
-about your projects and initiatives, and I'm happy to help in any way I can.
+Hello to the CAMEL AI community. It's great to see a group of like-minded
+individuals coming together to explore and advance the field of autonomous and
+communicative agents. Your open-source approach is truly commendable, as it
+fosters collaboration, innovation, and transparency. I'm excited to learn more
+about your projects and initiatives, and I'm happy to help in any way I can.
Keep pushing the boundaries of AI research and development!
===============================================================================
'''
diff --git a/examples/models/openrouter_llama4_example.py b/examples/models/openrouter_llama4_example.py
index ef69f52ba2..c6377764b0 100644
--- a/examples/models/openrouter_llama4_example.py
+++ b/examples/models/openrouter_llama4_example.py
@@ -29,7 +29,7 @@
# Set agent
camel_agent = ChatAgent(system_message=sys_msg, model=model)
-user_msg = """Say hi to CAMEL AI, one open-source community
+user_msg = """Say hi to CAMEL AI, one open-source community
dedicated to the study of autonomous and communicative agents."""
# Get response information
@@ -38,10 +38,10 @@
'''
===============================================================================
-Hello CAMEL AI! I'm excited to connect with an open-source community that's
-pushing the boundaries of autonomous and communicative agents. Your work in
-this area has the potential to drive significant advancements in AI research
-and applications. What exciting projects or initiatives is the
+Hello CAMEL AI! I'm excited to connect with an open-source community that's
+pushing the boundaries of autonomous and communicative agents. Your work in
+this area has the potential to drive significant advancements in AI research
+and applications. What exciting projects or initiatives is the
CAMEL AI community currently working on?
===============================================================================
'''
diff --git a/examples/models/ppio_model_example.py b/examples/models/ppio_model_example.py
index 87718195d8..47305f0258 100644
--- a/examples/models/ppio_model_example.py
+++ b/examples/models/ppio_model_example.py
@@ -35,7 +35,7 @@
# Set agent
camel_agent = ChatAgent(system_message=sys_msg, model=model_R1)
-user_msg = """Say hi to CAMEL AI, one open-source community
+user_msg = """Say hi to CAMEL AI, one open-source community
dedicated to the study of autonomous and communicative agents."""
# Get response information
@@ -44,21 +44,21 @@
'''
===============================================================================
-Hello CAMEL AI! 👋 A warm welcome to the open-source community pushing the
-boundaries of autonomous and communicative agents! Your work in exploring
-multi-agent systems, human-AI collaboration, and self-improving AI
-architectures is incredibly exciting. By fostering transparency and
-collaboration, you're empowering researchers and developers to tackle
-challenges like agent coordination, ethical alignment, and real-world
+Hello CAMEL AI! 👋 A warm welcome to the open-source community pushing the
+boundaries of autonomous and communicative agents! Your work in exploring
+multi-agent systems, human-AI collaboration, and self-improving AI
+architectures is incredibly exciting. By fostering transparency and
+collaboration, you're empowering researchers and developers to tackle
+challenges like agent coordination, ethical alignment, and real-world
deployment—critical steps toward responsible AI advancement.
-If anyone's curious, CAMEL AI's projects often dive into simulations where AI
-agents role-play scenarios (like a negotiation between a "seller" and "buyer"
-bot), testing how they communicate, adapt, and solve problems autonomously.
- This hands-on approach helps uncover insights into emergent behaviors and
+If anyone's curious, CAMEL AI's projects often dive into simulations where AI
+agents role-play scenarios (like a negotiation between a "seller" and "buyer"
+bot), testing how they communicate, adapt, and solve problems autonomously.
+ This hands-on approach helps uncover insights into emergent behaviors and
scalable solutions.
-Keep innovating! 🌟 The future of AI is brighter with communities like yours
+Keep innovating! 🌟 The future of AI is brighter with communities like yours
driving open, creative research.
===============================================================================
'''
@@ -85,7 +85,7 @@
# ruff: noqa: E501
'''
===============================================================================
-First, we need to understand what the notation $ 17_b $ and $ 97_b $ means. The subscript $ b $ indicates that the number is in base $ b $.
+First, we need to understand what the notation $ 17_b $ and $ 97_b $ means. The subscript $ b $ indicates that the number is in base $ b $.
1. **Convert $ 17_b $ to base 10:**
\[
diff --git a/examples/models/qwen_model_example.py b/examples/models/qwen_model_example.py
index 954ccfb5b3..8a3a6ea4d7 100644
--- a/examples/models/qwen_model_example.py
+++ b/examples/models/qwen_model_example.py
@@ -54,37 +54,37 @@ def __init__(self, exchange_id, api_key, secret, symbol, timeframe='1h', short_w
'secret': secret,
'enableRateLimit': True,
})
-
+
self.symbol = symbol
self.timeframe = timeframe
self.short_window = short_window
self.long_window = long_window
self.position = None # 'long', 'short', or None
-
+
def fetch_ohlcv(self, limit=100):
"""Fetch OHLCV data from exchange"""
raw_data = self.exchange.fetch_ohlcv(self.symbol, self.timeframe, limit=limit)
df = pd.DataFrame(raw_data, columns=['timestamp', 'open', 'high', 'low', 'close', 'volume'])
df['timestamp'] = pd.to_datetime(df['timestamp'], unit='ms')
return df
-
+
def calculate_indicators(self, df):
"""Calculate moving averages"""
df['sma_short'] = df['close'].rolling(window=self.short_window).mean()
df['sma_long'] = df['close'].rolling(window=self.long_window).mean()
return df
-
+
def generate_signal(self, df):
"""Generate buy/sell signals based on moving average crossover"""
if len(df) < self.long_window:
return 'HOLD'
-
+
# Get last two values for crossover detection
short_current = df['sma_short'].iloc[-1]
short_prev = df['sma_short'].iloc[-2]
long_current = df['sma_long'].iloc[-1]
long_prev = df['sma_long'].iloc[-2]
-
+
# Bullish crossover
if short_prev <= long_prev and short_current > long_current:
return 'BUY'
@@ -93,7 +93,7 @@ def generate_signal(self, df):
return 'SELL'
else:
return 'HOLD'
-
+
def execute_order(self, signal, amount=0.001): # Default small amount for safety
"""Execute buy/sell orders"""
try:
@@ -102,16 +102,16 @@ def execute_order(self, signal, amount=0.001): # Default small amount for safet
order = self.exchange.create_market_buy_order(self.symbol, amount)
self.position = 'long'
print(f"Order executed: {order}")
-
+
elif signal == 'SELL' and self.position != 'short':
print(f"[{datetime.now()}] SELL signal. Executing order...")
order = self.exchange.create_market_sell_order(self.symbol, amount)
self.position = 'short'
print(f"Order executed: {order}")
-
+
except Exception as e:
print(f"Order execution failed: {e}")
-
+
def run(self):
"""Main bot loop"""
print(f"Starting bot for {self.symbol} on {self.exchange.id}")
@@ -120,18 +120,18 @@ def run(self):
# Fetch market data
df = self.fetch_ohlcv()
df = self.calculate_indicators(df)
-
+
# Generate trading signal
signal = self.generate_signal(df)
print(f"[{datetime.now()}] Signal: {signal} | Price: {df['close'].iloc[-1]}")
-
+
# Execute order if signal is strong
if signal in ['BUY', 'SELL']:
self.execute_order(signal)
-
+
# Wait before next iteration
time.sleep(60) # Check every minute
-
+
except Exception as e:
print(f"Error: {e}")
time.sleep(60)
@@ -149,7 +149,7 @@ def run(self):
short_window=10,
long_window=50
)
-
+
# Run the bot (WARNING: This will execute real trades)
# bot.run() # Uncomment to run live
```
@@ -286,7 +286,7 @@ def trading_bot():
slow_ma_period = 20
print("Starting trading bot...")
-
+
while True:
try:
fast_ma = get_moving_average(symbol, KLINE_INTERVAL_1MINUTE, fast_ma_period)
diff --git a/examples/models/qwq_model_example.py b/examples/models/qwq_model_example.py
index b94f42065a..5262c9f3f4 100644
--- a/examples/models/qwq_model_example.py
+++ b/examples/models/qwq_model_example.py
@@ -22,7 +22,7 @@
model_config_dict={"temperature": 0.4},
)
-assistant_sys_msg = """You are a helpful and harmless assistant. You are Qwen
+assistant_sys_msg = """You are a helpful and harmless assistant. You are Qwen
developed by Alibaba. You should think step-by-step."""
agent = ChatAgent(assistant_sys_msg, model=ollama_model, token_limit=4096)
@@ -34,10 +34,10 @@
"""
===============================================================================
-Let's see. The word is "strawberry." I need to find out how many 'r's are in
-it. Okay, first, I'll spell it out slowly: s-t-r-a-w-b-e-r-r-y. Okay, now,
-I'll count the 'r's. Let's see: there's an 'r' after the 't', then another 'r'
-between the two 'r's towards the end, and one more at the end. Wait, no. Let's
+Let's see. The word is "strawberry." I need to find out how many 'r's are in
+it. Okay, first, I'll spell it out slowly: s-t-r-a-w-b-e-r-r-y. Okay, now,
+I'll count the 'r's. Let's see: there's an 'r' after the 't', then another 'r'
+between the two 'r's towards the end, and one more at the end. Wait, no. Let's
look again.
Spell it again: s-t-r-a-w-b-e-r-r-y.
@@ -62,18 +62,18 @@
Yes, that seems correct. So, the answer is three.
-Alternatively, I can think about the pronunciation or the way the word is
-structured, but I think just spelling it out and counting is the most
+Alternatively, I can think about the pronunciation or the way the word is
+structured, but I think just spelling it out and counting is the most
straightforward way.
Another way could be to break it down into syllables: straw-ber-ry. In "straw,
-" there's one 'r'. In "ber," there's another 'r'. And in "ry," there's another
+" there's one 'r'. In "ber," there's another 'r'. And in "ry," there's another
'r'. So, again, three 'r's.
-Wait, but in "ry," is there really an 'r'? Yes, "ry" has an 'r' and a 'y'. So,
+Wait, but in "ry," is there really an 'r'? Yes, "ry" has an 'r' and a 'y'. So,
that accounts for the third 'r'.
-So, whether I spell it out letter by letter or break it into syllables, I end
+So, whether I spell it out letter by letter or break it into syllables, I end
up with three 'r's.
I think that's pretty conclusive.
diff --git a/examples/models/reka_model_example.py b/examples/models/reka_model_example.py
index 5c2ae8e196..6531944136 100644
--- a/examples/models/reka_model_example.py
+++ b/examples/models/reka_model_example.py
@@ -28,7 +28,7 @@
# Set agent
camel_agent = ChatAgent(system_message=sys_msg, model=model)
-user_msg = """Say hi to CAMEL AI, one open-source community dedicated to the
+user_msg = """Say hi to CAMEL AI, one open-source community dedicated to the
study of autonomous and communicative agents."""
# Get response information
@@ -36,12 +36,12 @@
print(response.msgs[0].content)
'''
===============================================================================
- Hello CAMEL AI community! 🐫 I'm thrilled to connect with a group so
- dedicated to the study of autonomous and communicative agents. Your work is
- at the forefront of advancing AI technologies that can interact and operate
- independently in complex environments. I look forward to learning from your
- insights and contributing to the community in any way I can. Together, let's
- continue to push the boundaries of what's possible in AI research and
+ Hello CAMEL AI community! 🐫 I'm thrilled to connect with a group so
+ dedicated to the study of autonomous and communicative agents. Your work is
+ at the forefront of advancing AI technologies that can interact and operate
+ independently in complex environments. I look forward to learning from your
+ insights and contributing to the community in any way I can. Together, let's
+ continue to push the boundaries of what's possible in AI research and
development! 🚀
===============================================================================
'''
diff --git a/examples/models/samba_model_example.py b/examples/models/samba_model_example.py
index cf6c81f09a..d4f07d01f3 100644
--- a/examples/models/samba_model_example.py
+++ b/examples/models/samba_model_example.py
@@ -23,7 +23,7 @@
sys_msg = "You are a helpful assistant."
# Define user message
-user_msg = """Say hi to CAMEL AI, one open-source community dedicated to the
+user_msg = """Say hi to CAMEL AI, one open-source community dedicated to the
study of autonomous and communicative agents."""
@@ -46,14 +46,14 @@
print(response.msgs[0].content)
'''
===============================================================================
-Hello to the CAMEL AI community. It's great to see open-source communities
-like yours working on autonomous and communicative agents, as this field has
-the potential to revolutionize many areas of our lives, from customer service
+Hello to the CAMEL AI community. It's great to see open-source communities
+like yours working on autonomous and communicative agents, as this field has
+the potential to revolutionize many areas of our lives, from customer service
to healthcare and beyond.
-What specific projects or initiatives is the CAMEL AI community currently
-working on? Are there any exciting developments or breakthroughs that you'd
-like to share? I'm all ears (or rather, all text) and happy to learn more
+What specific projects or initiatives is the CAMEL AI community currently
+working on? Are there any exciting developments or breakthroughs that you'd
+like to share? I'm all ears (or rather, all text) and happy to learn more
about your work!
===============================================================================
'''
@@ -79,7 +79,7 @@
'''
===============================================================================
Hi CAMEL AI community! I'm here to help answer any questions you may have
-related to autonomous and communicative agents. Let me know how I can be
+related to autonomous and communicative agents. Let me know how I can be
of assistance.
===============================================================================
'''
diff --git a/examples/models/sglang_model_example.py b/examples/models/sglang_model_example.py
index 9cadd03f18..4574640b50 100644
--- a/examples/models/sglang_model_example.py
+++ b/examples/models/sglang_model_example.py
@@ -21,7 +21,7 @@
r"""Before using sglang to run LLM model offline,
you need to install flashinfer.
-Consider your machine's configuration and
+Consider your machine's configuration and
install flashinfer in an appropriate version.
For more details, please refer to:
https://sgl-project.github.io/start/install.html
@@ -29,7 +29,7 @@
Please load HF_token in your environment variable.
export HF_TOKEN=""
-When using the OpenAI interface to run SGLang model server,
+When using the OpenAI interface to run the SGLang model server,
the base model may fail to recognize huggingface default
chat template, switching to the Instruct model resolves the issue.
"""
@@ -92,7 +92,7 @@
llama3: Llama 3.1 / 3.2 (e.g. meta-llama/Llama-3.1-8B-Instruct,
meta-llama/Llama-3.2-1B-Instruct).
mistral: Mistral (e.g. mistralai/Mistral-7B-Instruct-v0.3,
- mistralai/Mistral-Nemo-Instruct-2407,
+ mistralai/Mistral-Nemo-Instruct-2407,
mistralai/ Mistral-Nemo-Instruct-2407, mistralai/Mistral-7B-v0.3).
qwen25: Qwen 2.5 (e.g. Qwen/Qwen2.5-1.5B-Instruct, Qwen/Qwen2.5-7B-Instruct).
"""
diff --git a/examples/models/siliconflow_model_example.py b/examples/models/siliconflow_model_example.py
index 140f7f29fd..fee5a2bf05 100644
--- a/examples/models/siliconflow_model_example.py
+++ b/examples/models/siliconflow_model_example.py
@@ -39,11 +39,11 @@
'''
===============================================================================
-Hello CAMEL AI community! 👋 Your dedication to advancing the study of
-autonomous and communicative agents through open-source collaboration is truly
-inspiring. The work you're doing to push the boundaries of AI interaction and
-cooperative systems will undoubtedly shape the future of intelligent
-technologies. Keep innovating, exploring, and fostering that spirit of shared
+Hello CAMEL AI community! 👋 Your dedication to advancing the study of
+autonomous and communicative agents through open-source collaboration is truly
+inspiring. The work you're doing to push the boundaries of AI interaction and
+cooperative systems will undoubtedly shape the future of intelligent
+technologies. Keep innovating, exploring, and fostering that spirit of shared
learning—the world is excited to see what you create next! 🚀
===============================================================================
'''
diff --git a/examples/models/togetherai_model_example.py b/examples/models/togetherai_model_example.py
index 69685b9694..2bfde2c3f9 100644
--- a/examples/models/togetherai_model_example.py
+++ b/examples/models/togetherai_model_example.py
@@ -28,7 +28,7 @@
# Set agent
camel_agent = ChatAgent(system_message=sys_msg, model=model)
-user_msg = """Say hi to CAMEL AI, one open-source community dedicated to the
+user_msg = """Say hi to CAMEL AI, one open-source community dedicated to the
study of autonomous and communicative agents."""
# Get response information
@@ -36,10 +36,10 @@
print(response.msgs[0].content)
'''
===============================================================================
-Hello CAMEL AI! It's great to connect with an open-source community that's
-pushing the boundaries of autonomous and communicative agents. I'm excited to
-learn more about the innovative work being done here. What are some of the
-most interesting projects or research areas that CAMEL AI is currently
+Hello CAMEL AI! It's great to connect with an open-source community that's
+pushing the boundaries of autonomous and communicative agents. I'm excited to
+learn more about the innovative work being done here. What are some of the
+most interesting projects or research areas that CAMEL AI is currently
exploring?
===============================================================================
'''
@@ -56,7 +56,7 @@
# Set agent
camel_agent = ChatAgent(system_message=sys_msg, model=model)
-user_msg = """Say hi to CAMEL AI, one open-source community dedicated to the
+user_msg = """Say hi to CAMEL AI, one open-source community dedicated to the
study of autonomous and communicative agents."""
# Get response information
@@ -64,10 +64,10 @@
print(response.msgs[0].content)
'''
===============================================================================
-Hello CAMEL AI community. It's great to connect with a group of like-minded
-individuals dedicated to advancing the field of autonomous and communicative
-agents. Your open-source approach to sharing knowledge and resources is truly
-commendable, and I'm excited to see the innovative projects and research that
+Hello CAMEL AI community. It's great to connect with a group of like-minded
+individuals dedicated to advancing the field of autonomous and communicative
+agents. Your open-source approach to sharing knowledge and resources is truly
+commendable, and I'm excited to see the innovative projects and research that
come out of your community. How can I assist or contribute to your endeavors?
===============================================================================
'''
diff --git a/examples/models/watsonx_model_example.py b/examples/models/watsonx_model_example.py
index ab849a99ca..919a46f1e0 100644
--- a/examples/models/watsonx_model_example.py
+++ b/examples/models/watsonx_model_example.py
@@ -41,7 +41,7 @@
'''
==============================================================================
The University of Oxford is approximately 928 years old in the year 2024.
-[ToolCallingRecord(tool_name='sub', args={'a': 2024, 'b': 1096}, result=928,
+[ToolCallingRecord(tool_name='sub', args={'a': 2024, 'b': 1096}, result=928,
tool_call_id='call_05f85b0fdd9241be912883')]
==============================================================================
'''
diff --git a/examples/models/yi_model_example.py b/examples/models/yi_model_example.py
index b0b6c85c79..7d28c18a67 100644
--- a/examples/models/yi_model_example.py
+++ b/examples/models/yi_model_example.py
@@ -29,7 +29,7 @@
# Set agent
camel_agent = ChatAgent(system_message=sys_msg, model=model)
-user_msg = """Say hi to CAMEL AI, one open-source community
+user_msg = """Say hi to CAMEL AI, one open-source community
dedicated to the study of autonomous and communicative agents."""
# Get response information
@@ -37,9 +37,9 @@
print(response.msgs[0].content)
'''
===============================================================================
-Hello CAMEL AI community! 👋 It's great to connect with an open-source group
-dedicated to the fascinating fields of autonomous and communicative agents. If
-there's anything you need assistance with or any interesting projects you're
+Hello CAMEL AI community! 👋 It's great to connect with an open-source group
+dedicated to the fascinating fields of autonomous and communicative agents. If
+there's anything you need assistance with or any interesting projects you're
working on, feel free to share. I'm here to help however I can! 😊
===============================================================================
'''
diff --git a/examples/models/zhipuai_model_example.py b/examples/models/zhipuai_model_example.py
index 6d6c5e7d36..fcd48ef57c 100644
--- a/examples/models/zhipuai_model_example.py
+++ b/examples/models/zhipuai_model_example.py
@@ -29,7 +29,7 @@
# Set agent
camel_agent = ChatAgent(system_message=sys_msg, model=model)
-user_msg = """Say hi to CAMEL AI, one open-source community
+user_msg = """Say hi to CAMEL AI, one open-source community
dedicated to the study of autonomous and communicative agents."""
# Get response information
@@ -37,9 +37,9 @@
print(response.msgs[0].content)
'''
===============================================================================
-Hello to CAMEL AI and its community! As a helpful assistant, I'm here to
-provide assistance, answer questions, and support the study of autonomous and
-communicative agents to the best of my abilities. If you have any specific
+Hello to CAMEL AI and its community! As a helpful assistant, I'm here to
+provide assistance, answer questions, and support the study of autonomous and
+communicative agents to the best of my abilities. If you have any specific
questions or need guidance on a particular topic, feel free to ask!
===============================================================================
'''
diff --git a/examples/personas/personas_generation.py b/examples/personas/personas_generation.py
index 3680c4acb3..2b2bde56ea 100644
--- a/examples/personas/personas_generation.py
+++ b/examples/personas/personas_generation.py
@@ -17,8 +17,8 @@
persona_group = PersonaHub()
# Use the text_to_persona method
-example_text = """Clinical Guideline: Administration of Injections in
-Pediatric Patients Purpose: To provide standardized care for pediatric
+example_text = """Clinical Guideline: Administration of Injections in
+Pediatric Patients Purpose: To provide standardized care for pediatric
patients requiring injections, ensuring safety, ..."""
inferred_persona = persona_group.text_to_persona(example_text, action="read")
diff --git a/examples/runtimes/daytona_runtime.py b/examples/runtimes/daytona_runtime.py
index ee132b6e21..bcf605cf8f 100644
--- a/examples/runtimes/daytona_runtime.py
+++ b/examples/runtimes/daytona_runtime.py
@@ -87,20 +87,20 @@ def sample_function(x: int, y: int) -> int:
"""
===============================================================================
user prompt:
-Weng earns $12 an hour for babysitting. Yesterday, she just did 51 minutes of
+Weng earns $12 an hour for babysitting. Yesterday, she just did 51 minutes of
babysitting. How much did she earn?
-msgs=[BaseMessage(role_name='Assistant', role_type=, meta_dict={}, content='Weng earned $10.20 for her 51 minutes of
-babysitting.', video_bytes=None, image_list=None, image_detail='auto',
-video_detail='low', parsed=None)] terminated=False info={'id':
-'chatcmpl-BTDAKaCAYvs6KFsxe9NmmxMHQiaxx', 'usage': {'completion_tokens': 18,
-'prompt_tokens': 122, 'total_tokens': 140, 'completion_tokens_details':
-{'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0,
-'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0,
-'cached_tokens': 0}}, 'termination_reasons': ['stop'], 'num_tokens': 99,
-'tool_calls': [ToolCallingRecord(tool_name='sample_function', args={'x': 12,
-'y': 51}, result=63, tool_call_id='call_clugUYSbh37yVAwpggG8Dwe0')],
+msgs=[BaseMessage(role_name='Assistant', role_type=, meta_dict={}, content='Weng earned $10.20 for her 51 minutes of
+babysitting.', video_bytes=None, image_list=None, image_detail='auto',
+video_detail='low', parsed=None)] terminated=False info={'id':
+'chatcmpl-BTDAKaCAYvs6KFsxe9NmmxMHQiaxx', 'usage': {'completion_tokens': 18,
+'prompt_tokens': 122, 'total_tokens': 140, 'completion_tokens_details':
+{'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0,
+'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0,
+'cached_tokens': 0}}, 'termination_reasons': ['stop'], 'num_tokens': 99,
+'tool_calls': [ToolCallingRecord(tool_name='sample_function', args={'x': 12,
+'y': 51}, result=63, tool_call_id='call_clugUYSbh37yVAwpggG8Dwe0')],
'external_tool_call_requests': None}
===============================================================================
"""
diff --git a/examples/runtimes/ubuntu_docker_runtime/Dockerfile b/examples/runtimes/ubuntu_docker_runtime/Dockerfile
index 5204f5b041..2dd93061dd 100644
--- a/examples/runtimes/ubuntu_docker_runtime/Dockerfile
+++ b/examples/runtimes/ubuntu_docker_runtime/Dockerfile
@@ -44,4 +44,4 @@ python3 /home/api.py --host 0.0.0.0 --port 8000\n\
' > /home/start.sh && chmod +x /home/start.sh
# Set default command to start API service with proper host binding
-CMD ["/home/start.sh"]
\ No newline at end of file
+CMD ["/home/start.sh"]
diff --git a/examples/runtimes/ubuntu_docker_runtime/README.md b/examples/runtimes/ubuntu_docker_runtime/README.md
index 0211347afc..f1ccd10ea2 100644
--- a/examples/runtimes/ubuntu_docker_runtime/README.md
+++ b/examples/runtimes/ubuntu_docker_runtime/README.md
@@ -72,4 +72,4 @@ python ubuntu_docker_example.py
This will:
- Use the Docker container to run a role-playing scenario
- Initialize the Qwen model for AI interactions
-- Execute a sample task with AI agents communication
\ No newline at end of file
+- Execute a sample task with AI agent communication
diff --git a/examples/runtimes/ubuntu_docker_runtime/manage_camel_docker.sh b/examples/runtimes/ubuntu_docker_runtime/manage_camel_docker.sh
index 734ca8614a..381474526e 100755
--- a/examples/runtimes/ubuntu_docker_runtime/manage_camel_docker.sh
+++ b/examples/runtimes/ubuntu_docker_runtime/manage_camel_docker.sh
@@ -25,15 +25,15 @@ show_help() {
build_image() {
echo "Starting Docker image build..."
echo "Using CAMEL source path: $CAMEL_ROOT"
-
+
# Build in temporary directory
TEMP_DIR=$(mktemp -d)
echo "Creating temporary build directory: $TEMP_DIR"
-
+
# Copy necessary files to temporary directory
cp "$SCRIPT_DIR/Dockerfile" "$TEMP_DIR/"
cp -r "$CAMEL_ROOT" "$TEMP_DIR/camel_source"
-
+
# Ensure API file exists
if [ ! -f "$CAMEL_ROOT/camel/runtimes/api.py" ]; then
echo "Error: API file not found at $CAMEL_ROOT/camel/runtimes/api.py"
@@ -47,15 +47,15 @@ build_image() {
# Modify Dockerfile COPY commands - fix the sed command
sed -i '' 's|COPY ../../../|COPY camel_source/|g' "$TEMP_DIR/Dockerfile"
sed -i '' 's|COPY camel/runtimes/api.py|COPY api/api.py|g' "$TEMP_DIR/Dockerfile"
-
+
# Build in temporary directory
(cd "$TEMP_DIR" && docker build -t ${FULL_NAME} .)
-
+
BUILD_RESULT=$?
-
+
# Clean temporary directory
rm -rf "$TEMP_DIR"
-
+
if [ $BUILD_RESULT -eq 0 ]; then
echo "Docker image build successful!"
echo "Image name: ${FULL_NAME}"
@@ -73,10 +73,10 @@ check_container() {
container_id=$1
echo "Checking container logs..."
docker logs $container_id
-
+
echo "Checking container status..."
docker inspect $container_id --format='{{.State.Status}}'
-
+
echo "Checking if API is responding..."
curl -v http://localhost:8000/docs
}
@@ -84,7 +84,7 @@ check_container() {
# Clean containers and images
clean() {
echo "Starting cleanup..."
-
+
# Stop and remove related containers
echo "Finding and stopping related containers..."
containers=$(docker ps -a --filter "ancestor=${FULL_NAME}" --format "{{.ID}}")
@@ -110,7 +110,7 @@ clean() {
# Clean unused images and build cache
echo "Cleaning unused images and build cache..."
docker system prune -f
-
+
echo "Cleanup complete"
}
@@ -148,4 +148,4 @@ case "$1" in
show_help
exit 1
;;
-esac
\ No newline at end of file
+esac
diff --git a/examples/services/agent_openapi_server.py b/examples/services/agent_openapi_server.py
index c5a2b79fc9..8c7bdf105a 100644
--- a/examples/services/agent_openapi_server.py
+++ b/examples/services/agent_openapi_server.py
@@ -202,7 +202,7 @@ def example_init_and_step():
print("History:", r.json())
"""
History: [{'role': 'system',
- 'content': 'You are a helpful assistant
+ 'content': 'You are a helpful assistant
with access to a wiki search tool.'}]
"""
diff --git a/examples/storages/chroma_vector_storage.py b/examples/storages/chroma_vector_storage.py
index 8a3a9b36de..67782464bf 100644
--- a/examples/storages/chroma_vector_storage.py
+++ b/examples/storages/chroma_vector_storage.py
@@ -22,7 +22,7 @@
from camel.types import VectorDistance
"""
-Before running this example, you need to setup ChromaDB based on your chosen
+Before running this example, you need to set up ChromaDB based on your chosen
connection type:
(Option 1): Ephemeral ChromaDB (In-Memory):
@@ -301,7 +301,7 @@ def main():
"""
===============================================================================
-This example demonstrates different ChromaDB connection types and
+This example demonstrates different ChromaDB connection types and
configurations. ChromaDB is an AI-native vector database for embeddings.
=== Ephemeral ChromaDB Connection Example ===
diff --git a/examples/storages/nebular_graph.py b/examples/storages/nebular_graph.py
index 1c85036261..ab1ef369d9 100644
--- a/examples/storages/nebular_graph.py
+++ b/examples/storages/nebular_graph.py
@@ -53,8 +53,8 @@
"""
==============================================================================
-{'node_props': {'CAMEL_AI': [], 'Agent_Framework': []}, 'rel_props':
-{'contribute_to': []}, 'relationships': ['contribute_to'], 'metadata':
+{'node_props': {'CAMEL_AI': [], 'Agent_Framework': []}, 'rel_props':
+{'contribute_to': []}, 'relationships': ['contribute_to'], 'metadata':
{'index': []}}
==============================================================================
"""
@@ -96,9 +96,9 @@
"""
==============================================================================
-{'node_props': {'Agent_Framework': [], 'CAMEL_AI': [], 'Graph_Database': [],
-'Nebula': [], 'agent_framework': []}, 'rel_props': {'Supporting': [],
-'contribute_to': []}, 'relationships': ['Supporting', 'contribute_to'],
+{'node_props': {'Agent_Framework': [], 'CAMEL_AI': [], 'Graph_Database': [],
+'Nebula': [], 'agent_framework': []}, 'rel_props': {'Supporting': [],
+'contribute_to': []}, 'relationships': ['Supporting', 'contribute_to'],
'metadata': {'index': []}}
==============================================================================
"""
diff --git a/examples/storages/pgvector_storage.py b/examples/storages/pgvector_storage.py
index 07b413732e..db45b5a8ec 100644
--- a/examples/storages/pgvector_storage.py
+++ b/examples/storages/pgvector_storage.py
@@ -21,7 +21,7 @@
)
"""
-Before running this example, you need to setup a PostgreSQL instance with
+Before running this example, you need to set up a PostgreSQL instance with
the pgvector extension:
1. Install PostgreSQL and pgvector extension:
diff --git a/examples/storages/redis_storage.py b/examples/storages/redis_storage.py
index 713440bdd8..5a16a0c5ca 100644
--- a/examples/storages/redis_storage.py
+++ b/examples/storages/redis_storage.py
@@ -38,7 +38,7 @@ def main():
loaded_records = storage.load()
logger.info(f"Loaded records: {loaded_records}")
"""
- Loaded records: [{'id': 1, 'name': 'Record1'}, {'id': 2, 'name':
+ Loaded records: [{'id': 1, 'name': 'Record1'}, {'id': 2, 'name':
'Record2'}]
"""
diff --git a/examples/storages/tidb_vector_storage.py b/examples/storages/tidb_vector_storage.py
index ec811b17c4..1976bdd4c7 100644
--- a/examples/storages/tidb_vector_storage.py
+++ b/examples/storages/tidb_vector_storage.py
@@ -23,10 +23,10 @@
(Option 1): TiDB Serverless
-1. Go to [TiDB Cloud](https://tidbcloud.com/console/clusters) to create
+1. Go to [TiDB Cloud](https://tidbcloud.com/console/clusters) to create
a serverless cluster
2. Click the **Connect** button
-3. Select "SQLAlchemy" > "PyMySQL" for the **Connect With** option, then
+3. Select "SQLAlchemy" > "PyMySQL" for the **Connect With** option, then
you can get the DATABASE_URL like:
DATABASE_URL="mysql+pymysql://:@:4000/test&ssl_verify_cert=true&ssl_verify_identity=true"
diff --git a/examples/storages/weaviate_vector_storage.py b/examples/storages/weaviate_vector_storage.py
index 7ffdea5034..48d88e45b2 100644
--- a/examples/storages/weaviate_vector_storage.py
+++ b/examples/storages/weaviate_vector_storage.py
@@ -275,7 +275,7 @@ def main():
"""
===============================================================================
-This example demonstrates different Weaviate connection types and
+This example demonstrates different Weaviate connection types and
configurations.
Make sure you have the appropriate Weaviate instance running.
diff --git a/examples/structured_response/structure_response_prompt_engineering.py b/examples/structured_response/structure_response_prompt_engineering.py
index 4c85584e45..f5400fc6cf 100644
--- a/examples/structured_response/structure_response_prompt_engineering.py
+++ b/examples/structured_response/structure_response_prompt_engineering.py
@@ -50,14 +50,14 @@ class StudentList(BaseModel):
===============================================================================
Certainly! Below is an example of a student's information:
-**Student Name:** Emily Johnson
-**Date of Birth:** March 12, 2005
-**Grade:** 10th Grade
-**School:** Lincoln High School
-**Address:** 456 Oak Street, Springfield, IL 62704
-**Phone Number:** (555) 123-4567
-**Email:** emily.johnson@student.lincolnhs.edu
-**Emergency Contact:** John Johnson (Father) - (555) 987-6543
+**Student Name:** Emily Johnson
+**Date of Birth:** March 12, 2005
+**Grade:** 10th Grade
+**School:** Lincoln High School
+**Address:** 456 Oak Street, Springfield, IL 62704
+**Phone Number:** (555) 123-4567
+**Email:** emily.johnson@student.lincolnhs.edu
+**Emergency Contact:** John Johnson (Father) - (555) 987-6543
Is there anything specific you need or any changes you'd like to make?
===============================================================================
@@ -110,7 +110,7 @@ class StudentList(BaseModel):
print(response2.msgs[0].parsed)
"""
===============================================================================
-studentList=[Student(name='Emily Johnson', age='18', email='emily.johnson@student.lincolnhs.edu')]
+studentList=[Student(name='Emily Johnson', age='18', email='emily.johnson@student.lincolnhs.edu')]
===============================================================================
""" # noqa: E501
diff --git a/examples/tasks/multi_modal_task_generation.py b/examples/tasks/multi_modal_task_generation.py
index 0d89196ec7..a672fe531e 100644
--- a/examples/tasks/multi_modal_task_generation.py
+++ b/examples/tasks/multi_modal_task_generation.py
@@ -117,19 +117,19 @@ def create_video_task(video_path: str, task_id: str = "1") -> Task:
# ruff: noqa: E501
"""
===============================================================================
-Task 0: Weng earns $12 an hour for babysitting. Yesterday, she just did 51
+Task 0: Weng earns $12 an hour for babysitting. Yesterday, she just did 51
minutes of babysitting. How much did she earn?
-Task 0.0: Weng earns $12 an hour for babysitting. However, her hourly rate
-increases by $2 for every additional hour worked beyond the first hour.
-Yesterday, she babysat for a total of 3 hours and 45 minutes. How much did she
+Task 0.0: Weng earns $12 an hour for babysitting. However, her hourly rate
+increases by $2 for every additional hour worked beyond the first hour.
+Yesterday, she babysat for a total of 3 hours and 45 minutes. How much did she
earn in total for her babysitting services?
Task 0.0: Convert 51 minutes to hours.
Task 0.1: Calculate the proportion of 51 minutes to an hour.
-Task 0.2: Multiply the proportion by Weng's hourly rate to find out how much
+Task 0.2: Multiply the proportion by Weng's hourly rate to find out how much
she earned for 51 minutes of babysitting.
===============================================================================
"""
diff --git a/examples/tasks/task_generation.py b/examples/tasks/task_generation.py
index 92d0819735..6e39d27045 100644
--- a/examples/tasks/task_generation.py
+++ b/examples/tasks/task_generation.py
@@ -56,19 +56,19 @@
# ruff: noqa: E501
"""
===============================================================================
-Task 0: Weng earns $12 an hour for babysitting. Yesterday, she just did 51
+Task 0: Weng earns $12 an hour for babysitting. Yesterday, she just did 51
minutes of babysitting. How much did she earn?
-Task 0.0: Weng earns $12 an hour for babysitting. However, her hourly rate
-increases by $2 for every additional hour worked beyond the first hour.
-Yesterday, she babysat for a total of 3 hours and 45 minutes. How much did she
+Task 0.0: Weng earns $12 an hour for babysitting. However, her hourly rate
+increases by $2 for every additional hour worked beyond the first hour.
+Yesterday, she babysat for a total of 3 hours and 45 minutes. How much did she
earn in total for her babysitting services?
Task 0.0: Convert 51 minutes to hours.
Task 0.1: Calculate the proportion of 51 minutes to an hour.
-Task 0.2: Multiply the proportion by Weng's hourly rate to find out how much
+Task 0.2: Multiply the proportion by Weng's hourly rate to find out how much
she earned for 51 minutes of babysitting.
===============================================================================
"""
diff --git a/examples/toolkits/aci_toolkit.py b/examples/toolkits/aci_toolkit.py
index cab3513472..597f518777 100644
--- a/examples/toolkits/aci_toolkit.py
+++ b/examples/toolkits/aci_toolkit.py
@@ -46,19 +46,19 @@
"""
==============================================================================
-msgs=[BaseMessage(role_name='assistant', role_type=, meta_dict={}, content='The repository **camel-ai/camel** has
-been successfully starred!', video_bytes=None, image_list=None,
+msgs=[BaseMessage(role_name='assistant', role_type=, meta_dict={}, content='The repository **camel-ai/camel** has
+been successfully starred!', video_bytes=None, image_list=None,
image_detail='auto', video_detail='low', parsed=None)] terminated=False info=
-{'id': 'chatcmpl-BTb0Qd0RUFWkIz96WPWzKkhTEx4GZ', 'usage':
-{'completion_tokens': 15, 'prompt_tokens': 1323, 'total_tokens': 1338,
-'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens':
-0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0},
-'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 1280}},
-'termination_reasons': ['stop'], 'num_tokens': 57, 'tool_calls':
-[ToolCallingRecord(tool_name='GITHUB__STAR_REPOSITORY', args={'path': {'repo':
-'camel', 'owner': 'camel-ai'}}, result={'success': True, 'data': {}},
-tool_call_id='call_5jlmAN7VKEq1Pc9kppWBJvoZ')], 'external_tool_call_requests':
+{'id': 'chatcmpl-BTb0Qd0RUFWkIz96WPWzKkhTEx4GZ', 'usage':
+{'completion_tokens': 15, 'prompt_tokens': 1323, 'total_tokens': 1338,
+'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens':
+0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0},
+'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 1280}},
+'termination_reasons': ['stop'], 'num_tokens': 57, 'tool_calls':
+[ToolCallingRecord(tool_name='GITHUB__STAR_REPOSITORY', args={'path': {'repo':
+'camel', 'owner': 'camel-ai'}}, result={'success': True, 'data': {}},
+tool_call_id='call_5jlmAN7VKEq1Pc9kppWBJvoZ')], 'external_tool_call_requests':
None}
==============================================================================
"""
diff --git a/examples/toolkits/arxiv_toolkit.py b/examples/toolkits/arxiv_toolkit.py
index 4fd99c253e..9320873cb1 100644
--- a/examples/toolkits/arxiv_toolkit.py
+++ b/examples/toolkits/arxiv_toolkit.py
@@ -44,26 +44,26 @@
print(str(response.info['tool_calls'])[:1000])
'''
===============================================================================
-[ToolCallingRecord(func_name='search_papers', args={'query': 'attention is
-all you need'}, result=[{'title': "Attention Is All You Need But You Don't
-Need All Of It For Inference of Large Language Models", 'published_date':
-'2024-07-22', 'authors': ['Georgy Tyukin', 'Gbetondji J-S Dovonon', 'Jean
+[ToolCallingRecord(func_name='search_papers', args={'query': 'attention is
+all you need'}, result=[{'title': "Attention Is All You Need But You Don't
+Need All Of It For Inference of Large Language Models", 'published_date':
+'2024-07-22', 'authors': ['Georgy Tyukin', 'Gbetondji J-S Dovonon', 'Jean
Kaddour', 'Pasquale Minervini'], 'entry_id': 'http://arxiv.org/abs/2407.
-15516v1', 'summary': 'The inference demand for LLMs has skyrocketed in recent
-months, and serving\nmodels with low latencies remains challenging due to the
-quadratic input length\ncomplexity of the attention layers. In this work, we
-investigate the effect of\ndropping MLP and attention layers at inference time
-on the performance of\nLlama-v2 models. We find that dropping dreeper
-attention layers only marginally\ndecreases performance but leads to the best
-speedups alongside dropping entire\nlayers. For example, removing 33\\% of
-attention layers in a 13B Llama2 model\nresults in a 1.8\\% drop in average
+15516v1', 'summary': 'The inference demand for LLMs has skyrocketed in recent
+months, and serving\nmodels with low latencies remains challenging due to the
+quadratic input length\ncomplexity of the attention layers. In this work, we
+investigate the effect of\ndropping MLP and attention layers at inference time
+on the performance of\nLlama-v2 models. We find that dropping dreeper
+attention layers only marginally\ndecreases performance but leads to the best
+speedups alongside dropping entire\nlayers. For example, removing 33\\% of
+attention layers in a 13B Llama2 model\nresults in a 1.8\\% drop in average
performance ove...
===============================================================================
'''
# Define a user message
-usr_msg = """Download paper "attention is all you need" for me to my
+usr_msg = """Download paper "attention is all you need" for me to my
local path '/Users/enrei/Desktop/camel0826/camel/examples/tool_call'"""
# Get response information
@@ -71,9 +71,9 @@
print(str(response.info['tool_calls'])[:1000])
'''
===============================================================================
-[ToolCallingRecord(func_name='download_papers', args={'query': 'attention
+[ToolCallingRecord(func_name='download_papers', args={'query': 'attention
is all you need', 'output_dir': '/Users/enrei/Desktop/camel0826/camel/examples/
-tool_call', 'paper_ids': ['2407.15516v1', '2107.08000v1', '2306.01926v1',
+tool_call', 'paper_ids': ['2407.15516v1', '2107.08000v1', '2306.01926v1',
'2112.05993v1', '1912.11959v2']}, result='papers downloaded successfully')]
===============================================================================
'''
diff --git a/examples/toolkits/ask_news_toolkit.py b/examples/toolkits/ask_news_toolkit.py
index b868778122..9831261a94 100644
--- a/examples/toolkits/ask_news_toolkit.py
+++ b/examples/toolkits/ask_news_toolkit.py
@@ -24,18 +24,18 @@
[1]:
Title: Can Elon Musk Become President of the United States?
-Summary: Elon Musk, the American billionaire, has been appointed to lead the
-Department of Government Efficiency in Donald Trump's upcoming administration,
-sparking speculation about his potential presidential ambitions. However,
-according to the US Constitution, the President must be a natural-born citizen
-of the United States. As Musk was born in South Africa and became a Canadian
-citizen through his mother, he does not meet this requirement. While he
-acquired US citizenship in 2002, this does not make him a natural-born
-citizen. Additionally, the Constitution requires the President to be at least
-35 years old and a resident of the United States for at least 14 years. Musk
-can, however, hold other government positions, as the requirement of being a
-natural-born citizen only applies to the President and Vice President. Many
-non-US-born citizens have held prominent government positions in the past,
+Summary: Elon Musk, the American billionaire, has been appointed to lead the
+Department of Government Efficiency in Donald Trump's upcoming administration,
+sparking speculation about his potential presidential ambitions. However,
+according to the US Constitution, the President must be a natural-born citizen
+of the United States. As Musk was born in South Africa and became a Canadian
+citizen through his mother, he does not meet this requirement. While he
+acquired US citizenship in 2002, this does not make him a natural-born
+citizen. Additionally, the Constitution requires the President to be at least
+35 years old and a resident of the United States for at least 14 years. Musk
+can, however, hold other government positions, as the requirement of being a
+natural-born citizen only applies to the President and Vice President. Many
+non-US-born citizens have held prominent government positions in the past,
including Henry
===============================================================================
"""
diff --git a/examples/toolkits/async_browser_toolkit.py b/examples/toolkits/async_browser_toolkit.py
index eb0f99225f..89ecbc1174 100644
--- a/examples/toolkits/async_browser_toolkit.py
+++ b/examples/toolkits/async_browser_toolkit.py
@@ -53,7 +53,7 @@
web_agent = ChatAgent(
"""
-You are a helpful assistant that can search the web, simulate browser
+You are a helpful assistant that can search the web, simulate browser
actions, and provide relevant information to solve the given task.
""",
model=web_model,
@@ -87,7 +87,7 @@
workforce.add_single_agent_worker(
"""
- An agent that can search the web, simulate browser actions,
+ An agent that can search the web, simulate browser actions,
and provide relevant information to solve the given task.""",
worker=web_agent,
)
@@ -107,8 +107,8 @@
"""
==========================================================================
-The current #1 best-selling product in the gaming category on Amazon is
-'Minecraft: Switch Edition - Nintendo Switch'. The price is $35.97, and
+The current #1 best-selling product in the gaming category on Amazon is
+'Minecraft: Switch Edition - Nintendo Switch'. The price is $35.97, and
it has a rating of 4.8 stars based on 1,525 ratings.
==========================================================================
"""
diff --git a/examples/toolkits/audio_analysis_toolkit.py b/examples/toolkits/audio_analysis_toolkit.py
index 8b66dccf35..c94f93c9c5 100644
--- a/examples/toolkits/audio_analysis_toolkit.py
+++ b/examples/toolkits/audio_analysis_toolkit.py
@@ -20,14 +20,14 @@
audio_models = OpenAIAudioModels()
# Set example input
-input = """CAMEL-AI.org is an open-source community dedicated to the study of
-autonomous and communicative agents. We believe that studying these agents on
-a large scale offers valuable insights into their behaviors, capabilities, and
-potential risks. To facilitate research in this field, we provide, implement,
-and support various types of agents, tasks, prompts, models, datasets, and
+input = """CAMEL-AI.org is an open-source community dedicated to the study of
+autonomous and communicative agents. We believe that studying these agents on
+a large scale offers valuable insights into their behaviors, capabilities, and
+potential risks. To facilitate research in this field, we provide, implement,
+and support various types of agents, tasks, prompts, models, datasets, and
simulated environments.
-Join us via Slack, Discord, or WeChat in pushing the boundaries of building AI
+Join us via Slack, Discord, or WeChat in pushing the boundaries of building AI
Society."""
# Set example local path to store the file
@@ -71,27 +71,27 @@
"""
==========================================================================
-2025-03-09 22:54:55,822 - camel.camel.toolkits.audio_analysis_toolkit -
+2025-03-09 22:54:55,822 - camel.camel.toolkits.audio_analysis_toolkit -
WARNING - No audio transcription model provided. Using OpenAIAudioModels.
-The audio content discusses Camel AI, an open-source community dedicated to
-the study of autonomous and communicative agents. It emphasizes the belief
-that large-scale research on these agents can yield valuable insights into
-their behaviors, capabilities, and potential risks. The community provides
-resources to support research, including various types of agents, tasks,
-prompts, models, datasets, and simulated environments. Additionally, it
-invites listeners to join the community through platforms like Slack, Discord,
+The audio content discusses Camel AI, an open-source community dedicated to
+the study of autonomous and communicative agents. It emphasizes the belief
+that large-scale research on these agents can yield valuable insights into
+their behaviors, capabilities, and potential risks. The community provides
+resources to support research, including various types of agents, tasks,
+prompts, models, datasets, and simulated environments. Additionally, it
+invites listeners to join the community through platforms like Slack, Discord,
or WeChat to contribute to the development of AI society.
Here is the transcription of the audio:
-"CamelAI.org is an open-source community dedicated to the study of autonomous
-and communicative agents. We believe that studying these agents on a large
-scale offers valuable insights into their behaviors, capabilities, and
-potential risks. To facilitate research in this field, we provide, implement,
-and support various types of agents, tasks, prompts, models, datasets, and
-simulated environments. Join us via Slack, Discord, or WeChat in pushing the
+"CamelAI.org is an open-source community dedicated to the study of autonomous
+and communicative agents. We believe that studying these agents on a large
+scale offers valuable insights into their behaviors, capabilities, and
+potential risks. To facilitate research in this field, we provide, implement,
+and support various types of agents, tasks, prompts, models, datasets, and
+simulated environments. Join us via Slack, Discord, or WeChat in pushing the
boundaries of building AI society."
==========================================================================
"""
diff --git a/examples/toolkits/bohrium_toolkit_example.py b/examples/toolkits/bohrium_toolkit_example.py
index 39a4c76144..57ddb00d03 100644
--- a/examples/toolkits/bohrium_toolkit_example.py
+++ b/examples/toolkits/bohrium_toolkit_example.py
@@ -62,9 +62,9 @@
- **Machine Type**: c2_m4_cpu (2 CPU, 4 GB Memory)
- **Region**: cn-zhangjiakou
-The job is currently pending or initializing. I will keep you updated on
+The job is currently pending or initializing. I will keep you updated on
its progress. If you need further assistance, feel free to ask!
-If you want to monitor the job's status or logs, let me know and I can
+If you want to monitor the job's status or logs, let me know and I can
assist you with that as well.
==========================================================================
"""
diff --git a/examples/toolkits/browser_toolkit.py b/examples/toolkits/browser_toolkit.py
index 9ffc7f99d6..47397918ca 100644
--- a/examples/toolkits/browser_toolkit.py
+++ b/examples/toolkits/browser_toolkit.py
@@ -65,8 +65,8 @@
print(response.msgs[0].content)
"""
==========================================================================
-The current #1 best-selling product in the gaming category on Amazon is the
-**AutoFull C3 Gaming Chair**.
+The current #1 best-selling product in the gaming category on Amazon is the
+**AutoFull C3 Gaming Chair**.
- **Price:** $249.99
- **Rating:** 4.4 stars based on 5,283 ratings.
diff --git a/examples/toolkits/browser_toolkit_with_cookie.py b/examples/toolkits/browser_toolkit_with_cookie.py
index b741137a33..45320fefba 100644
--- a/examples/toolkits/browser_toolkit_with_cookie.py
+++ b/examples/toolkits/browser_toolkit_with_cookie.py
@@ -72,8 +72,8 @@
print(response.msgs[0].content)
"""
==========================================================================
-The current #1 best-selling product in the gaming category on Amazon is the
-**AutoFull C3 Gaming Chair**.
+The current #1 best-selling product in the gaming category on Amazon is the
+**AutoFull C3 Gaming Chair**.
- **Price:** $249.99
- **Rating:** 4.4 stars based on 5,283 ratings.
diff --git a/examples/toolkits/context_summarizer_toolkit.py b/examples/toolkits/context_summarizer_toolkit.py
index fbe72f6beb..6b4c339d96 100644
--- a/examples/toolkits/context_summarizer_toolkit.py
+++ b/examples/toolkits/context_summarizer_toolkit.py
@@ -22,7 +22,7 @@
intelligent context summarization and memory management capabilities for
ChatAgent. The toolkit enables agents to:
1. Manually save conversation memory when context becomes cluttered
-2. Load previous context summaries
+2. Load previous context summaries
3. Search through conversation history using text search
4. Get information about current memory state
5. Check if context should be compressed based on message/token limits
@@ -70,39 +70,39 @@
'''
===============================================================================
-Assistant: In spring, Japan is especially beautiful with cherry blossoms in
+Assistant: In spring, Japan is especially beautiful with cherry blossoms in
full bloom. Here are some of the best cities to visit:
-1. **Tokyo**: The bustling capital offers a mix of modern life and
-traditional culture. Don't miss Ueno Park and Shinjuku Gyoen for
+1. **Tokyo**: The bustling capital offers a mix of modern life and
+traditional culture. Don't miss Ueno Park and Shinjuku Gyoen for
cherry blossom viewing.
-2. **Kyoto**: Known for its classical Buddhist temples, beautiful gardens,
-and traditional wooden houses. The Philosopher's Path is lined with
+2. **Kyoto**: Known for its classical Buddhist temples, beautiful gardens,
+and traditional wooden houses. The Philosopher's Path is lined with
hundreds of cherry trees.
-3. **Osaka**: A lively city known for modern architecture, vibrant
-nightlife, and delicious street food. Osaka Castle is a great spot
+3. **Osaka**: A lively city known for modern architecture, vibrant
+nightlife, and delicious street food. Osaka Castle is a great spot
for hanami.
-4. **Nara**: Home to free-roaming deer and the impressive Todaiji Temple.
+4. **Nara**: Home to free-roaming deer and the impressive Todaiji Temple.
Nara Park is another great place for cherry blossoms.
-5. **Hiroshima**: Visit the Peace Memorial Park and Museum. Nearby
-Miyajima Island is famous for its floating torii gate and beautiful
+5. **Hiroshima**: Visit the Peace Memorial Park and Museum. Nearby
+Miyajima Island is famous for its floating torii gate and beautiful
cherry blossoms.
-6. **Sapporo**: Sapporo in Hokkaido is known for its late-blooming
+6. **Sapporo**: Sapporo in Hokkaido is known for its late-blooming
cherry blossoms, and offers a different perspective from mainland Japan.
-7. **Kanazawa**: Known for its well-preserved Edo-era districts, art
+7. **Kanazawa**: Known for its well-preserved Edo-era districts, art
museums, and beautiful gardens like Kenrokuen.
-8. **Fukuoka**: Offers a mix of modern city life and historical sites.
+8. **Fukuoka**: Offers a mix of modern city life and historical sites.
Fukuoka Castle ruins are surrounded by cherry blossoms in Maizuru Park.
-These cities provide a mix of modern attractions and historical sites, all
-enhanced by the seasonal beauty of cherry blossoms. Let me know if you need
+These cities provide a mix of modern attractions and historical sites, all
+enhanced by the seasonal beauty of cherry blossoms. Let me know if you need
more details on any specific city or travel tips!
===============================================================================
'''
@@ -118,8 +118,8 @@
'''
===============================================================================
-Assistant: To fully experience Tokyo's rich culture and diverse food scene,
-I recommend spending at least 4 to 5 days in the city. Here's a suggested
+Assistant: To fully experience Tokyo's rich culture and diverse food scene,
+I recommend spending at least 4 to 5 days in the city. Here's a suggested
breakdown to make the most of your stay:
**Day 1: Explore Traditional Tokyo**
@@ -130,7 +130,7 @@
**Day 2: Discover Modern Tokyo**
- Spend time in Shibuya and Shinjuku to see the bustling city life.
- Explore Harajuku's trendy fashion and unique cafes.
-- Visit the observation deck at Tokyo Metropolitan Government Building
+- Visit the observation deck at Tokyo Metropolitan Government Building
for a panoramic city view.
- Enjoy modern Japanese cuisine or sushi at a renowned Tokyo restaurant.
@@ -141,16 +141,16 @@
**Day 4: Experience Tokyo's Culinary Scene**
- Join a food tour to explore Tsukiji Outer Market for fresh seafood.
-- Wander through different neighborhoods to try ramen, tempura, and
+- Wander through different neighborhoods to try ramen, tempura, and
more street food.
- Visit Ginza for upscale shopping and dining.
**Day 5 (Optional): Day Trip or Special Interest**
- Consider a day trip to nearby Nikko or Kamakura for historical exploration.
-- Enjoy a themed cafe experience or attend a cultural workshop
+- Enjoy a themed cafe experience or attend a cultural workshop
(e.g., tea ceremony).
-This itinerary gives you a balanced mix of cultural, historical, and
+This itinerary gives you a balanced mix of cultural, historical, and
culinary experiences, highlighting the dynamic and diverse facets of Tokyo.
===============================================================================
'''
@@ -163,48 +163,48 @@
'''
===============================================================================
-Assistant: The best way to travel between cities in Japan is by using the
-country's efficient and reliable train system. Here are some of the most
+Assistant: The best way to travel between cities in Japan is by using the
+country's efficient and reliable train system. Here are some of the most
popular options:
-1. **Shinkansen (Bullet Train)**:
- - The Shinkansen is the fastest and most convenient way to travel
+1. **Shinkansen (Bullet Train)**:
+ - The Shinkansen is the fastest and most convenient way to travel
between major cities such as Tokyo, Kyoto, Osaka, Hiroshima, and more.
- - It's known for its punctuality and comfort. You can reserve seats in
- advance for longer journeys, which is often recommended during busy
+ - It's known for its punctuality and comfort. You can reserve seats in
+ advance for longer journeys, which is often recommended during busy
travel seasons.
- - Consider purchasing a Japan Rail Pass (JR Pass) if you plan to travel
- extensively by train. It offers unlimited rides on most Shinkansen lines
- and JR trains for a set period (7, 14, or 21 days), which can be very
+ - Consider purchasing a Japan Rail Pass (JR Pass) if you plan to travel
+ extensively by train. It offers unlimited rides on most Shinkansen lines
+ and JR trains for a set period (7, 14, or 21 days), which can be very
cost-effective.
2. **Limited Express Trains**:
- - For routes not covered by the Shinkansen, Limited Express trains run
+ - For routes not covered by the Shinkansen, Limited Express trains run
frequently and are also comfortable.
- These are ideal for shorter distances or routes within certain regions.
3. **Local Trains**:
- - Local trains are available for shorter distances and are often more
+ - Local trains are available for shorter distances and are often more
economical, but they might take longer.
4. **Domestic Flights**:
- - For distant cities, like traveling from Tokyo to Sapporo or Okinawa,
+ - For distant cities, like traveling from Tokyo to Sapporo or Okinawa,
domestic flights might be more convenient.
- - Low-cost carriers and domestic airlines offer frequent flights between
+ - Low-cost carriers and domestic airlines offer frequent flights between
major airports.
-5. **Bus Services**:
- - Highway buses are a cost-effective alternative to trains for
- long-distance travel. They might take longer but are suitable for
+5. **Bus Services**:
+ - Highway buses are a cost-effective alternative to trains for
+ long-distance travel. They might take longer but are suitable for
overnight travel.
6. **Car Rental**:
- - Renting a car is an option if you want to explore more rural or remote
- areas. However, navigating cities like Tokyo can be challenging due to
+ - Renting a car is an option if you want to explore more rural or remote
+ areas. However, navigating cities like Tokyo can be challenging due to
traffic and parking limitations.
-Each mode of transportation has its benefits depending on your itinerary,
-budget, and preferences. Using a mix of these options can help optimize your
+Each mode of transportation has its benefits depending on your itinerary,
+budget, and preferences. Using a mix of these options can help optimize your
travel experience in Japan.
===============================================================================
'''
@@ -217,32 +217,32 @@
'''
===============================================================================
-Assistant: Yes, it is generally advisable to book accommodations in advance,
-especially when traveling to popular destinations like Tokyo, Kyoto, and
-Osaka during peak tourist seasons. Here are some reasons why booking ahead
+Assistant: Yes, it is generally advisable to book accommodations in advance,
+especially when traveling to popular destinations like Tokyo, Kyoto, and
+Osaka during peak tourist seasons. Here are some reasons why booking ahead
is a good idea:
-1. **Availability**: Popular hotels and unique accommodations (like ryokans,
-traditional Japanese inns) can fill up quickly, especially during cherry
+1. **Availability**: Popular hotels and unique accommodations (like ryokans,
+traditional Japanese inns) can fill up quickly, especially during cherry
blossom season in spring or autumn for the fall foliage.
-2. **Cost**: Booking early often allows you to take advantage of early bird
-rates and promotions. Prices can increase significantly as your travel dates
+2. **Cost**: Booking early often allows you to take advantage of early bird
+rates and promotions. Prices can increase significantly as your travel dates
approach, particularly in popular areas.
-3. **Choice**: You'll have a wider selection of accommodations to choose
-from, ensuring that you can find a place that fits your preferences for
+3. **Choice**: You'll have a wider selection of accommodations to choose
+from, ensuring that you can find a place that fits your preferences for
comfort, location, and amenities.
-4. **Peace of Mind**: Having your accommodations sorted in advance adds
-convenience and reduces stress, allowing you to focus on planning other
+4. **Peace of Mind**: Having your accommodations sorted in advance adds
+convenience and reduces stress, allowing you to focus on planning other
aspects of your trip.
-5. **Flexibility**: Many hotels and booking platforms offer flexible
+5. **Flexibility**: Many hotels and booking platforms offer flexible
cancellation policies, so you can often make changes if needed.
-To ensure a smooth and enjoyable experience, consider researching and
-booking your accommodations a few months in advance, especially if you have
+To ensure a smooth and enjoyable experience, consider researching and
+booking your accommodations a few months in advance, especially if you have
specific places or experiences you'd like to include in your itinerary.
===============================================================================
'''
@@ -255,8 +255,8 @@
'''
===============================================================================
-Assistant: Learning a few essential Japanese phrases can greatly enhance your
-travel experience in Japan, as it shows respect for the local culture and
+Assistant: Learning a few essential Japanese phrases can greatly enhance your
+travel experience in Japan, as it shows respect for the local culture and
can help in communication. Here are some helpful phrases:
1. **Basic Greetings:**
@@ -266,7 +266,7 @@
- Goodbye: さようなら (Sayonara)
2. **Polite Expressions:**
- - Thank you: ありがとう (Arigatou) /
+ - Thank you: ありがとう (Arigatou) /
ありがとうございます (Arigatou gozaimasu)
- Please: お願いします (Onegaishimasu)
- Excuse me / I'm sorry: すみません (Sumimasen)
@@ -292,8 +292,8 @@
- Help!: 助けて!(Tasukete!)
- Call a doctor: 医者を呼んでください (Isha o yonde kudasai)
-These phrases cover a variety of situations you might encounter during your
-trip. It's also helpful to have a translation app or phrasebook on hand for
+These phrases cover a variety of situations you might encounter during your
+trip. It's also helpful to have a translation app or phrasebook on hand for
more complex conversations.
===============================================================================
'''
@@ -307,10 +307,10 @@
'''
===============================================================================
-Assistant: Our current memory status has 12 messages in memory, and there are
-no summary or history files saved yet. We can store summaries or history of
-our conversations, and you can also perform text searches within these
-sessions if needed. If you have any specific requirements or need a summary,
+Assistant: Our current memory status has 12 messages in memory, and there are
+no summary or history files saved yet. We can store summaries or history of
+our conversations, and you can also perform text searches within these
+sessions if needed. If you have any specific requirements or need a summary,
feel free to let me know!
===============================================================================
'''
@@ -328,7 +328,7 @@
'''
===============================================================================
-Assistant: I've cleared some of the previous conversation for a fresh start.
+Assistant: I've cleared some of the previous conversation for a fresh start.
How can I assist you with your travel plans today?
===============================================================================
'''
@@ -345,7 +345,7 @@
'''
===============================================================================
-Assistant: Sure, I can help you plan a workout routine! Could you tell me a
+Assistant: Sure, I can help you plan a workout routine! Could you tell me a
bit about your fitness goals and any preferences or limitations you have?
===============================================================================
'''
@@ -362,10 +362,10 @@
'''
===============================================================================
-Assistant: Your name is John, and we discussed your trip to Japan.
-Specifically about Tokyo, we talked about its modern attractions and
-traditional sites, like visiting Shinjuku Gyoen Park for cherry blossoms,
-and exploring areas such as Shibuya and Akihabara. Is there anything more
+Assistant: Your name is John, and we discussed your trip to Japan.
+Specifically about Tokyo, we talked about its modern attractions and
+traditional sites, like visiting Shinjuku Gyoen Park for cherry blossoms,
+and exploring areas such as Shibuya and Akihabara. Is there anything more
you'd like to know or plan regarding your visit to Tokyo?
===============================================================================
'''
@@ -382,29 +382,29 @@
'''
===============================================================================
-Assistant: We discussed various transportation options in Japan, particularly
+Assistant: We discussed various transportation options in Japan, particularly
focusing on the train system. Here are some key points:
-1. **Shinkansen (Bullet Train)**: The fastest and most convenient way to
-travel between major cities like Tokyo, Kyoto, and Osaka. It's punctual and
-comfortable. If you plan to use trains extensively, a Japan Rail Pass is
+1. **Shinkansen (Bullet Train)**: The fastest and most convenient way to
+travel between major cities like Tokyo, Kyoto, and Osaka. It's punctual and
+comfortable. If you plan to use trains extensively, a Japan Rail Pass is
recommended.
-2. **Limited Express Trains**: These are suitable for routes not covered by
+2. **Limited Express Trains**: These are suitable for routes not covered by
the Shinkansen and are ideal for shorter distances within certain regions.
3. **Local Trains**: More economical but slower, ideal for shorter distances.
-4. **Domestic Flights**: Useful for traveling to distant cities like
+4. **Domestic Flights**: Useful for traveling to distant cities like
Sapporo or Okinawa.
-5. **Bus Services**: A cost-effective alternative to trains, suitable for
+5. **Bus Services**: A cost-effective alternative to trains, suitable for
long-distance travel including overnight trips.
-6. **Car Rental**: An option if exploring rural areas, but navigating cities
+6. **Car Rental**: An option if exploring rural areas, but navigating cities
like Tokyo might be challenging.
-Each transportation mode has its benefits depending on your itinerary and
+Each transportation mode has its benefits depending on your itinerary and
preferences. Is there anything more specific you'd like to know?
===============================================================================
'''
@@ -420,10 +420,10 @@
'''
===============================================================================
-Assistant: Currently, we have 16 messages in memory, and a summary of our
-conversation is available with 1,544 characters. The full conversation history
-file contains 11,212 characters. Our memory management utilizes a lightweight
-file-based text search, and there are 8 searchable sessions. Is there anything
+Assistant: Currently, we have 16 messages in memory, and a summary of our
+conversation is available with 1,544 characters. The full conversation history
+file contains 11,212 characters. Our memory management utilizes a lightweight
+file-based text search, and there are 8 searchable sessions. Is there anything
else you need help with regarding our session or your trip planning?
===============================================================================
'''
diff --git a/examples/toolkits/dappier_toolkit.py b/examples/toolkits/dappier_toolkit.py
index bc413cec16..48e534f84f 100644
--- a/examples/toolkits/dappier_toolkit.py
+++ b/examples/toolkits/dappier_toolkit.py
@@ -22,13 +22,13 @@
print(real_time_data_response)
"""
===============================================================================
-CAMEL-AI is pretty cool! It's the first LLM (Large Language Model) multi-agent
-framework and an open-source community focused on exploring the scaling laws
+CAMEL-AI is pretty cool! It's the first LLM (Large Language Model) multi-agent
+framework and an open-source community focused on exploring the scaling laws
of agents. 🌟
Here are some highlights:
-- **Purpose**: It aims to create highly customizable intelligent agents and
+- **Purpose**: It aims to create highly customizable intelligent agents and
build multi-agent systems for real-world applications.
- **Features**: CAMEL provides a role-playing approach and inception prompting
to help chat agents complete tasks aligned with human intentions.
@@ -57,7 +57,7 @@
# Example with ChatAgent using the Real Time Search.
agent = ChatAgent(
- system_message="""You are a helpful assistant that can use brave search
+ system_message="""You are a helpful assistant that can use brave search
engine to answer questions.""",
tools=[FunctionTool(DappierToolkit().search_real_time_data)],
)
@@ -69,7 +69,7 @@
print(response.msgs[0].content)
"""
===============================================================================
-The current temperature in Tokyo is 50°F (about 10°C). It's a bit chilly,
+The current temperature in Tokyo is 50°F (about 10°C). It's a bit chilly,
so you might want to grab a jacket! 🧥🌬️
===============================================================================
"""
@@ -85,60 +85,60 @@
print(ai_recommendations_response)
"""
===============================================================================
-{'author': 'Andrew Buller-Russ',
+{'author': 'Andrew Buller-Russ',
'image_url': 'https://images.dappier.com/dm_01j0pb465keqmatq9k83dthx34/
-Syndication-Detroit-Free-Press-25087075_.jpg?width=428&height=321',
-'pubdate': 'Thu, 02 Jan 2025 03:12:06 +0000',
-'source_url': 'https://sportsnaut.com/nick-bosa-detroit-lions-trade-rumors-49ers/',
-'summary': 'In a thrilling Monday night game, the Detroit Lions triumphed
-over the San Francisco 49ers 40-34, solidifying their status as a top NFL
-team. Despite a strong performance from Nick Bosa, who recorded eight tackles
-and two sacks, the 49ers\' playoff hopes were dashed. Bosa praised the Lions\'
-competitive spirit and resilience under Coach Dan Campbell, sparking
-about his interest in joining the team, although he remains under contract
-with the 49ers for four more seasons. Bosa\'s admiration for the Lions
-highlights the stark contrast between the two franchises\' fortunes,
+Syndication-Detroit-Free-Press-25087075_.jpg?width=428&height=321',
+'pubdate': 'Thu, 02 Jan 2025 03:12:06 +0000',
+'source_url': 'https://sportsnaut.com/nick-bosa-detroit-lions-trade-rumors-49ers/',
+'summary': 'In a thrilling Monday night game, the Detroit Lions triumphed
+over the San Francisco 49ers 40-34, solidifying their status as a top NFL
+team. Despite a strong performance from Nick Bosa, who recorded eight tackles
+and two sacks, the 49ers\' playoff hopes were dashed. Bosa praised the Lions\'
+competitive spirit and resilience under Coach Dan Campbell, sparking
+about his interest in joining the team, although he remains under contract
+with the 49ers for four more seasons. Bosa\'s admiration for the Lions
+highlights the stark contrast between the two franchises\' fortunes,
with the Lions celebrating a significant victory while the 49ers struggle.
-Having experienced playoff success with the 49ers, Bosa values strong
-leadership from both Campbell and his own coach, Kyle Shanahan. His comments
-reflect a broader sentiment in the NFL about the importance of winning and
-the positive environment it fosters for players.',
+Having experienced playoff success with the 49ers, Bosa values strong
+leadership from both Campbell and his own coach, Kyle Shanahan. His comments
+reflect a broader sentiment in the NFL about the importance of winning and
+the positive environment it fosters for players.',
'title': 'Nick Bosa gushes about Detroit Lions, sparking 49ers trade rumors'}
-{'author': 'Andrew Buller-Russ',
+{'author': 'Andrew Buller-Russ',
'image_url': 'https://images.dappier.com/dm_01j0pb465keqmatq9k83dthx34/
-Baseball-World-Baseball-Classic-Semifinal-Japan-vs-Mexico-20279015_.jpg?width=428&height=321',
-'pubdate': 'Thu, 02 Jan 2025 02:43:38 +0000',
+Baseball-World-Baseball-Classic-Semifinal-Japan-vs-Mexico-20279015_.jpg?width=428&height=321',
+'pubdate': 'Thu, 02 Jan 2025 02:43:38 +0000',
'source_url': 'https://www.lafbnetwork.com/los-angeles-dodgers/
-los-angeles-dodgers-news/los-angeles-dodgers-meeting-roki-sasaki/',
-'summary': 'Roki Sasaki, a talented 23-year-old Japanese pitcher, is
-approaching a decision on his MLB free agency, with the Los Angeles Dodgers
-among the frontrunners to sign him. They are competing against teams like
-the Chicago Cubs, New York Mets, and others. The Dodgers are set to meet
-with Sasaki, emphasizing his signing as a top priority despite facing
-competition from around 20 other teams. Sasaki\'s status as a minor-league
-posting player may allow him to be signed at a more affordable price,
+los-angeles-dodgers-news/los-angeles-dodgers-meeting-roki-sasaki/',
+'summary': 'Roki Sasaki, a talented 23-year-old Japanese pitcher, is
+approaching a decision on his MLB free agency, with the Los Angeles Dodgers
+among the frontrunners to sign him. They are competing against teams like
+the Chicago Cubs, New York Mets, and others. The Dodgers are set to meet
+with Sasaki, emphasizing his signing as a top priority despite facing
+competition from around 20 other teams. Sasaki\'s status as a minor-league
+posting player may allow him to be signed at a more affordable price,
increasing his appeal. As he gathers information and prepares for a second
-round of meetings, the Dodgers are keen to secure him before the posting
-window closes on January 24, with the international signing period beginning
-on January 15.', 'title': 'Los Angeles Dodgers Take Another Step Toward
+round of meetings, the Dodgers are keen to secure him before the posting
+window closes on January 24, with the international signing period beginning
+on January 15.', 'title': 'Los Angeles Dodgers Take Another Step Toward
Signing Roki Sasaki'}
-{'author': 'Andrew Buller-Russ',
+{'author': 'Andrew Buller-Russ',
'image_url': 'https://images.dappier.com/dm_01j0pb465keqmatq9k83dthx34/
-NFL-Detroit-Lions-at-Kansas-City-Chiefs-24020812_.jpg?width=428&height=321',
-'pubdate': 'Thu, 02 Jan 2025 02:08:34 +0000',
-'source_url': 'https://sportsnaut.com/detroit-lions-cut-jamal-adams/',
-'summary': 'The Detroit Lions, with a strong 14-2 record, have released
-former All-Pro safety Jamal Adams from their practice squad ahead of a crucial
-Week 18 game against the Minnesota Vikings. Adams, who joined the Lions on
-December 1, 2024, played in two games but recorded only three tackles in
-20 defensive snaps, representing a mere 17% of the team\'s defensive plays.
-This marks Adams\' second release this season, having previously been cut
-by the Tennessee Titans after three appearances. The Lions\' decision to part
-ways with Adams comes as they focus on their playoff positioning for the
-upcoming game.',
-'title': 'Detroit Lions cut bait with All-Pro ahead of Week 18 matchup with
+NFL-Detroit-Lions-at-Kansas-City-Chiefs-24020812_.jpg?width=428&height=321',
+'pubdate': 'Thu, 02 Jan 2025 02:08:34 +0000',
+'source_url': 'https://sportsnaut.com/detroit-lions-cut-jamal-adams/',
+'summary': 'The Detroit Lions, with a strong 14-2 record, have released
+former All-Pro safety Jamal Adams from their practice squad ahead of a crucial
+Week 18 game against the Minnesota Vikings. Adams, who joined the Lions on
+December 1, 2024, played in two games but recorded only three tackles in
+20 defensive snaps, representing a mere 17% of the team\'s defensive plays.
+This marks Adams\' second release this season, having previously been cut
+by the Tennessee Titans after three appearances. The Lions\' decision to part
+ways with Adams comes as they focus on their playoff positioning for the
+upcoming game.',
+'title': 'Detroit Lions cut bait with All-Pro ahead of Week 18 matchup with
Vikings'}
===============================================================================
"""
diff --git a/examples/toolkits/data_commons_toolkit.py b/examples/toolkits/data_commons_toolkit.py
index 9828beea31..da278d43c8 100644
--- a/examples/toolkits/data_commons_toolkit.py
+++ b/examples/toolkits/data_commons_toolkit.py
@@ -18,7 +18,7 @@
# Example 1: Query Data Commons
geoId06_name_query = '''
-SELECT ?name ?dcid
+SELECT ?name ?dcid
WHERE {
?a typeOf Place .
?a name ?name .
@@ -33,8 +33,8 @@
'''
===============================================================================
Query Result:
-[{'?name': 'Kentucky', '?dcid': 'geoId/21'},
- {'?name': 'California', '?dcid': 'geoId/06'},
+[{'?name': 'Kentucky', '?dcid': 'geoId/21'},
+ {'?name': 'California', '?dcid': 'geoId/06'},
{'?name': 'Maryland', '?dcid': 'geoId/24'}]
===============================================================================
'''
diff --git a/examples/toolkits/dingtalk.py b/examples/toolkits/dingtalk.py
index ee49bb1223..8931b68ac2 100644
--- a/examples/toolkits/dingtalk.py
+++ b/examples/toolkits/dingtalk.py
@@ -278,7 +278,7 @@ def main():
============================================================
1️⃣ Example 1: Get departments and send welcome message
-Text Message Response: I retrieved the department list and sent the welcome
+Text Message Response: I retrieved the department list and sent the welcome
message.
Departments found:
@@ -301,7 +301,7 @@ def main():
2️⃣ Example 2: Get user info and send markdown message
-User Info and Markdown Response: I fetched the user details and sent a
+User Info and Markdown Response: I fetched the user details and sent a
formatted markdown message.
User details:
@@ -324,18 +324,18 @@ def main():
title: "User Profile — 测试用户"
markdown_content: |
### 用户资料
-
- - **姓名**: 测试用户
- - **用户ID**: test_userid
- - **部门**: 技术部, 产品部
- - **职位**: 高级工程师
- - **手机**: 138****1234
+
+ - **姓名**: 测试用户
+ - **用户ID**: test_userid
+ - **部门**: 技术部, 产品部
+ - **职位**: 高级工程师
+ - **手机**: 138****1234
- **邮箱**: test@company.com
3️⃣ Example 3: Search users and send webhook notification
-User Search and Webhook Response: Search completed and webhook notification
+User Search and Webhook Response: Search completed and webhook notification
sent. Results:
- 测试用户 (test_userid) — 部门: 技术部
- 测试用户2 (test_user2) — 部门: 产品部
@@ -350,11 +350,11 @@ def main():
Args:
content: |
# User search results for 'test'
-
+
- 测试用户 (test_userid) — 部门: 技术部
- 测试用户2 (test_user2) — 部门: 产品部
- 测试用户3 (test_user3) — 部门: 运营部
-
+
Total: 3 users found.
msgtype: "markdown"
title: "User Search Results"
@@ -364,7 +364,7 @@ def main():
4️⃣ Example 4: Send multiple message types
-Multiple Message Types Response: Done — text message to test_userid sent and
+Multiple Message Types Response: Done — text message to test_userid sent and
webhook status message posted.
Tool calls:
1. send_text_message
@@ -376,14 +376,14 @@ def main():
Args:
content: |
### System Status
-
+
- **Overall**: ✅ Operational
- **Uptime**: 72 hours
- **Services**:
- API: ✅ Healthy
- Messaging: ✅ Healthy
- Database: ✅ Healthy
-
+
_Last checked: just now_
msgtype: "markdown"
title: "System Status"
diff --git a/examples/toolkits/dynamic_dependency.py b/examples/toolkits/dynamic_dependency.py
index 7e4b1063e0..6d0abfd149 100644
--- a/examples/toolkits/dynamic_dependency.py
+++ b/examples/toolkits/dynamic_dependency.py
@@ -126,12 +126,12 @@
[ 17, 39 ]
Response:
-The JavaScript code calculates the dot product of two matrices.
+The JavaScript code calculates the dot product of two matrices.
The `dotProduct` function takes two matrices as input and returns a new matrix
representing their dot product. The code first checks for invalid inputs
(empty or null matrices, or incompatible dimensions). Then, it iterates through
-the rows of the first matrix and the columns of the second matrix, computing
-the sum of the products of the corresponding elements. The example uses
+the rows of the first matrix and the columns of the second matrix, computing
+the sum of the products of the corresponding elements. The example uses
matrices `[[1, 2], [3, 4]]` and `[[5], [6]]` and the output is `[17, 39]`.
'''
@@ -156,24 +156,24 @@
# Clean up the container
docker_interpreter.cleanup()
-"""System update example Hit:1 https://deb.nodesource.com/node_22.x nodistro
+"""System update example Hit:1 https://deb.nodesource.com/node_22.x nodistro
InRelease
Get:2 http://security.ubuntu.com/ubuntu jammy-security InRelease [129 kB]
Hit:3 http://archive.ubuntu.com/ubuntu jammy InRelease
Hit:4 https://ppa.launchpadcontent.net/deadsnakes/ppa/ubuntu jammy InRelease
Get:5 http://archive.ubuntu.com/ubuntu jammy-updates InRelease [128 kB]
-Get:6 http://security.ubuntu.com/ubuntu jammy-security/universe amd64
+Get:6 http://security.ubuntu.com/ubuntu jammy-security/universe amd64
Packages [1245 kB]
Get:7 http://archive.ubuntu.com/ubuntu jammy-backports InRelease [127 kB]
-Get:8 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64
+Get:8 http://archive.ubuntu.com/ubuntu jammy-updates/main amd64
Packages [3295 kB]
-Get:9 http://security.ubuntu.com/ubuntu jammy-security/restricted amd64
+Get:9 http://security.ubuntu.com/ubuntu jammy-security/restricted amd64
Packages [4468 kB]
-Get:10 http://security.ubuntu.com/ubuntu jammy-security/main amd64
+Get:10 http://security.ubuntu.com/ubuntu jammy-security/main amd64
Packages [2979 kB]
-Get:11 http://archive.ubuntu.com/ubuntu jammy-updates/restricted amd64
+Get:11 http://archive.ubuntu.com/ubuntu jammy-updates/restricted amd64
Packages [4630 kB]
-Get:12 http://archive.ubuntu.com/ubuntu jammy-updates/universe amd64
+Get:12 http://archive.ubuntu.com/ubuntu jammy-updates/universe amd64
Packages [1553 kB]
Fetched 18.6 MB in 3s (5312 kB/s)
Reading package lists...
@@ -184,7 +184,7 @@
Calculating upgrade...
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
-pip example pip 25.1.1 from /usr/local/lib/python3.10/dist-packages/pip
+pip example pip 25.1.1 from /usr/local/lib/python3.10/dist-packages/pip
(python 3.10)
npm example 10.9.2
diff --git a/examples/toolkits/edgeone_pages_mcp_toolkit.py b/examples/toolkits/edgeone_pages_mcp_toolkit.py
index 0e271730f0..7ca070ec12 100644
--- a/examples/toolkits/edgeone_pages_mcp_toolkit.py
+++ b/examples/toolkits/edgeone_pages_mcp_toolkit.py
@@ -52,7 +52,7 @@ async def main():
"""
==============================================================================
-I've created a Hello World page for you! The page has been deployed and is now
+I've created a Hello World page for you! The page has been deployed and is now
live at:
**https://mcp.edgeone.site/share/M-MXzJzHJ3mGc013OQNIM**
diff --git a/examples/toolkits/excel_toolkit.py b/examples/toolkits/excel_toolkit.py
index d3107f0b32..1af37b24e0 100644
--- a/examples/toolkits/excel_toolkit.py
+++ b/examples/toolkits/excel_toolkit.py
@@ -65,7 +65,7 @@
print(response1.msgs[0].content)
"""--- 1. Creating a new workbook ---
The Excel workbook has been created successfully at the specified path,
- and the 'Employees' sheet has been populated with the provided data.
+ and the 'Employees' sheet has been populated with the provided data.
If you need any further assistance, feel free to ask!
"""
@@ -80,7 +80,7 @@
print(response2.msgs[0].content)
"""--- 2. Add a new row to the 'Employees' sheet ---
The row ['David', 40, 'Chicago', 'HR'] has been successfully
- appended to the 'Employees' sheet in the workbook. If you
+ appended to the 'Employees' sheet in the workbook. If you
need any more modifications or assistance, just let me know!"""
# --- 3. Get all rows from the sheet to verify ---
@@ -115,7 +115,7 @@
print(response4.msgs[0].content)
"""--- 4. Updating a row ---
Row 3 in the 'Employees' sheet has been successfully updated with the
- new data: ['Bob', 30, 'San Francisco', 'Sales']. If you need any more
+ new data: ['Bob', 30, 'San Francisco', 'Sales']. If you need any more
changes or assistance, just let me know!"""
# --- 5. Verifying data after update ---
diff --git a/examples/toolkits/github_toolkit.py b/examples/toolkits/github_toolkit.py
index a3a6264007..60f64163f7 100644
--- a/examples/toolkits/github_toolkit.py
+++ b/examples/toolkits/github_toolkit.py
@@ -31,7 +31,7 @@
workflows/publish_release.yml', '.github/workflows/pytest_apps.yml', '.github/
workflows/pytest_package.yml', '.gitignore', '.pre-commit-config.yaml', '.
style.yapf', 'CONTRIBUTING.md', 'LICENSE', 'Makefile', 'README.md', 'apps/
-agents/README.md', 'apps/agents/agents.py', 'apps/agents/test/test_agents.py',
+agents/README.md', 'apps/agents/agents.py', 'apps/agents/test/test_agents.py',
'apps/agents/test/test_text_utils.py', 'apps/agents/text_utils.py', 'apps/
common/auto_zip.py', 'apps/common/test/test_archive_1.zip', 'apps/common/test/
test_auto_zip.py', 'apps/data_explorer/.gitignore', 'apps/data_explorer/README.
@@ -40,7 +40,7 @@
test_data_explorer.py', 'apps/data_explorer/test/test_loader.py', 'apps/
dilemma/database_connection.py', 'apps/dilemma/dilemma.py', 'apps/dilemma/
requirements.txt', 'camel/__init__.py', 'camel/agents/__init__.py', 'camel/
-agents/base.py', 'camel/agents/chat_agent.py', 'camel/agents/critic_agent.py',
+agents/base.py', 'camel/agents/chat_agent.py', 'camel/agents/critic_agent.py',
'camel/agents/deductive_reasoner_agent.py',...
===============================================================================
"""
diff --git a/examples/toolkits/google_calendar_toolkit.py b/examples/toolkits/google_calendar_toolkit.py
index 6e11838b55..6554c8385e 100644
--- a/examples/toolkits/google_calendar_toolkit.py
+++ b/examples/toolkits/google_calendar_toolkit.py
@@ -39,10 +39,10 @@
'''
===============================================================================
[ToolCallingRecord(tool_name='get_events', args={'time_min': '2025-03-30T00:00
-:00', 'max_results': 20}, result=[{'Event ID': '6s7mlm7aupsq5tjefsp8ru37hb',
-'Summary': 'growth', 'Start Time': '2025-03-31T19:00:00+08:00', 'End Time':
+:00', 'max_results': 20}, result=[{'Event ID': '6s7mlm7aupsq5tjefsp8ru37hb',
+'Summary': 'growth', 'Start Time': '2025-03-31T19:00:00+08:00', 'End Time':
'2025-03-31T20:00:00+08:00', 'Timezone': 'Europe/London', 'Link': 'https://ww
-w.google.com/calendar/event?eid=NnM3bWxtN2F1cHNxNXRqZWZzcDhy_xxxxxxx',
-'Attendees': ['xxxx', 'xxxx', 'xxxx', 'xxxx', 'xxxx'], 'Organizer': 'xxxx'},
+w.google.com/calendar/event?eid=NnM3bWxtN2F1cHNxNXRqZWZzcDhy_xxxxxxx',
+'Attendees': ['xxxx', 'xxxx', 'xxxx', 'xxxx', 'xxxx'], 'Organizer': 'xxxx'},
===============================================================================
'''
diff --git a/examples/toolkits/google_scholar_toolkit.py b/examples/toolkits/google_scholar_toolkit.py
index 34ceaea2fc..3ce891bdfa 100644
--- a/examples/toolkits/google_scholar_toolkit.py
+++ b/examples/toolkits/google_scholar_toolkit.py
@@ -47,31 +47,31 @@
"""
===============================================================================
[ToolCallingRecord(func_name='get_author_detailed_info', args={}, result=
-{'container_type': 'Author', 'filled': ['basics', 'indices', 'counts',
-'coauthors', 'publications', 'public_access'], 'scholar_id': 'JicYPdAAAAAJ',
-'source': <AuthorSource.AUTHOR_PROFILE_PAGE: 'AUTHOR_PROFILE_PAGE'>, 'name':
+{'container_type': 'Author', 'filled': ['basics', 'indices', 'counts',
+'coauthors', 'publications', 'public_access'], 'scholar_id': 'JicYPdAAAAAJ',
+'source': <AuthorSource.AUTHOR_PROFILE_PAGE: 'AUTHOR_PROFILE_PAGE'>, 'name':
'Geoffrey Hinton', 'url_picture': 'https://scholar.googleusercontent.com/
-citations?view_op=view_photo&user=JicYPdAAAAAJ&citpid=2', 'affiliation':
-'Emeritus Prof. Computer Science, University of Toronto', 'organization':
-8515235176732148308, 'interests': ['machine learning', 'psychology',
-'artificial intelligence', 'cognitive science', 'computer science'],
+citations?view_op=view_photo&user=JicYPdAAAAAJ&citpid=2', 'affiliation':
+'Emeritus Prof. Computer Science, University of Toronto', 'organization':
+8515235176732148308, 'interests': ['machine learning', 'psychology',
+'artificial intelligence', 'cognitive science', 'computer science'],
'email_domain': '@cs.toronto.edu', 'homepage': 'http://www.cs.toronto.edu/
-~hinton', 'citedby': 853541, 'citedby5y': 560063, 'hindex': 186, 'hindex5y':
-137, 'i10index': 483, 'i10index5y': 368, 'cites_per_year': {1989: 2627, 1990:
-3589, 1991: 3766, 1992: 4091, 1993: 4573, 1994: 4499, 1995: 4090, 1996: 3935,
-1997: 3740, 1998: 3744, 1999: 3559, 2000: 3292, 2001: 3398, 2002: 3713, 2003:
-3670, 2004: 3393, 2005: 3813, 2006: 4168, 2007: 4558, 2008: 4349, 2009: 4784,
-2010: 5238, 2011: 5722, 2012: 6746, 2013: 9900, 2014: 12751, 2015: 18999,
-2016: 29932, 2017: 43675, 2018: 63544, 2019: 80800, 2020: 90523, 2021: 101735,
-2022: 104036, 2023: 106452, 2024: 76413}, 'coauthors': [{'container_type':
+~hinton', 'citedby': 853541, 'citedby5y': 560063, 'hindex': 186, 'hindex5y':
+137, 'i10index': 483, 'i10index5y': 368, 'cites_per_year': {1989: 2627, 1990:
+3589, 1991: 3766, 1992: 4091, 1993: 4573, 1994: 4499, 1995: 4090, 1996: 3935,
+1997: 3740, 1998: 3744, 1999: 3559, 2000: 3292, 2001: 3398, 2002: 3713, 2003:
+3670, 2004: 3393, 2005: 3813, 2006: 4168, 2007: 4558, 2008: 4349, 2009: 4784,
+2010: 5238, 2011: 5722, 2012: 6746, 2013: 9900, 2014: 12751, 2015: 18999,
+2016: 29932, 2017: 43675, 2018: 63544, 2019: 80800, 2020: 90523, 2021: 101735,
+2022: 104036, 2023: 106452, 2024: 76413}, 'coauthors': [{'container_type':
'Author', 'filled': [], 'scholar_id': 'm1qAiOUAAAAJ', 'source': <AuthorSource.
-CO_AUTHORS_LIST: 'CO_AUTHORS_LIST'>, 'name': 'Terrence Sejnowski',
-'affiliation': 'Francis Crick Professor, Salk Institute, Distinguished
-Professor, UC San Diego'}, {'container_type': 'Author', 'filled': [],
-'scholar_id': 'RnoIxUwAAAAJ', 'source': <AuthorSource.CO_AUTHORS_LIST:
-'CO_AUTHORS_LIST'>, 'name': 'Vinod Nair', 'affiliation': 'Research Scientist,
-DeepMind'}, {'container_type': 'Author', 'filled': [], 'scholar_id':
-'ghbWy-0AAAAJ', 'source': <AuthorSource.CO_AUTHORS_LIST: 'CO_AUTHORS_LIST'>,
+CO_AUTHORS_LIST: 'CO_AUTHORS_LIST'>, 'name': 'Terrence Sejnowski',
+'affiliation': 'Francis Crick Professor, Salk Institute, Distinguished
+Professor, UC San Diego'}, {'container_type': 'Author', 'filled': [],
+'scholar_id': 'RnoIxUwAAAAJ', 'source': <AuthorSource.CO_AUTHORS_LIST:
+'CO_AUTHORS_LIST'>, 'name': 'Vinod Nair', 'affiliation': 'Research Scientist,
+DeepMind'}, {'container_type': 'Author', 'filled': [], 'scholar_id':
+'ghbWy-0AAAAJ', 'source': <AuthorSource.CO_AUTHORS_LIST: 'CO_AUTHORS_LIST'>,
'name': 'George E. Dahl', 'affiliation': 'Google Inc.'}, {'container_
===============================================================================
"""
@@ -85,18 +85,18 @@
"""
===============================================================================
[ToolCallingRecord(func_name='get_author_publications', args={}, result=
-['Imagenet classification with deep convolutional neural networks', 'Deep
-learning', 'Learning internal representations by error-propagation', 'Dropout:
-a simple way to prevent neural networks from overfitting', 'Visualizing data
-using t-SNE', 'Learning representations by back-propagating errors', 'Learning
-multiple layers of features from tiny images', 'Rectified linear units improve
-restricted boltzmann machines', 'Reducing the dimensionality of data with
-neural networks', 'A fast learning algorithm for deep belief nets',
-'Distilling the Knowledge in a Neural Network', 'A simple framework for
-contrastive learning of visual representations', 'Deep neural networks for
-acoustic modeling in speech recognition: The shared views of four research
-groups', 'Layer normalization', 'Speech recognition with deep recurrent neural
-networks', 'Improving neural networks by preventing co-adaptation of feature
+['Imagenet classification with deep convolutional neural networks', 'Deep
+learning', 'Learning internal representations by error-propagation', 'Dropout:
+a simple way to prevent neural networks from overfitting', 'Visualizing data
+using t-SNE', 'Learning representations by back-propagating errors', 'Learning
+multiple layers of features from tiny images', 'Rectified linear units improve
+restricted boltzmann machines', 'Reducing the dimensionality of data with
+neural networks', 'A fast learning algorithm for deep belief nets',
+'Distilling the Knowledge in a Neural Network', 'A simple framework for
+contrastive learning of visual representations', 'Deep neural networks for
+acoustic modeling in speech recognition: The shared views of four research
+groups', 'Layer normalization', 'Speech recognition with deep recurrent neural
+networks', 'Improving neural networks by preventing co-adaptation of feature
detectors', 'Lec
===============================================================================
"""
@@ -112,38 +112,38 @@
"""
===============================================================================
[ToolCallingRecord(func_name='get_publication_by_title', args=
-{'publication_title': 'Camel: Communicative agents for" mind" exploration of
-large language model society'}, result={'container_type': 'Publication',
-'source': <PublicationSource.AUTHOR_PUBLICATION_ENTRY:
-'AUTHOR_PUBLICATION_ENTRY'>, 'bib': {'title': 'Camel: Communicative agents
-for" mind" exploration of large language model society', 'pub_year': 2023,
-'citation': 'Advances in Neural Information Processing Systems 36, 2023',
-'author': 'Guohao Li and Hasan Hammoud and Hani Itani and Dmitrii Khizbullin
-and Bernard Ghanem', 'journal': 'Advances in Neural Information Processing
-Systems', 'volume': '36', 'abstract': 'The rapid advancement of chat-based
-language models has led to remarkable progress in complex task-solving.
-However, their success heavily relies on human input to guide the
-conversation, which can be challenging and time-consuming. This paper explores
-the potential of building scalable techniques to facilitate autonomous
-cooperation among communicative agents, and provides insight into their
-"cognitive" processes. To address the challenges of achieving autonomous
-cooperation, we propose a novel communicative agent framework named
-role-playing. Our approach involves using inception prompting to guide chat
-agents toward task completion while maintaining consistency with human
-intentions. We showcase how role-playing can be used to generate
-conversational data for studying the behaviors and capabilities of a society
-of agents, providing a valuable resource for investigating conversational
-language models. In particular, we conduct comprehensive studies on
-instruction-following cooperation in multi-agent settings. Our contributions
-include introducing a novel communicative agent framework, offering a scalable
-approach for studying the cooperative behaviors and capabilities of
-multi-agent systems, and open-sourcing our library to support research on
-communicative agents and beyond: https://github. com/camel-ai/camel.'},
-'filled': True, 'author_pub_id': 'J9K-D0sAAAAJ:_Qo2XoVZTnwC', 'num_citations':
-364, 'citedby_url': '/scholar?hl=en&cites=3976259482297250805', 'cites_id':
+{'publication_title': 'Camel: Communicative agents for" mind" exploration of
+large language model society'}, result={'container_type': 'Publication',
+'source': <PublicationSource.AUTHOR_PUBLICATION_ENTRY:
+'AUTHOR_PUBLICATION_ENTRY'>, 'bib': {'title': 'Camel: Communicative agents
+for" mind" exploration of large language model society', 'pub_year': 2023,
+'citation': 'Advances in Neural Information Processing Systems 36, 2023',
+'author': 'Guohao Li and Hasan Hammoud and Hani Itani and Dmitrii Khizbullin
+and Bernard Ghanem', 'journal': 'Advances in Neural Information Processing
+Systems', 'volume': '36', 'abstract': 'The rapid advancement of chat-based
+language models has led to remarkable progress in complex task-solving.
+However, their success heavily relies on human input to guide the
+conversation, which can be challenging and time-consuming. This paper explores
+the potential of building scalable techniques to facilitate autonomous
+cooperation among communicative agents, and provides insight into their
+"cognitive" processes. To address the challenges of achieving autonomous
+cooperation, we propose a novel communicative agent framework named
+role-playing. Our approach involves using inception prompting to guide chat
+agents toward task completion while maintaining consistency with human
+intentions. We showcase how role-playing can be used to generate
+conversational data for studying the behaviors and capabilities of a society
+of agents, providing a valuable resource for investigating conversational
+language models. In particular, we conduct comprehensive studies on
+instruction-following cooperation in multi-agent settings. Our contributions
+include introducing a novel communicative agent framework, offering a scalable
+approach for studying the cooperative behaviors and capabilities of
+multi-agent systems, and open-sourcing our library to support research on
+communicative agents and beyond: https://github. com/camel-ai/camel.'},
+'filled': True, 'author_pub_id': 'J9K-D0sAAAAJ:_Qo2XoVZTnwC', 'num_citations':
+364, 'citedby_url': '/scholar?hl=en&cites=3976259482297250805', 'cites_id':
['3976259482297250805'], 'pub_url': 'https://proceedings.neurips.cc/
paper_files/paper/2023/hash/
-a3621ee907def47c1b952ade25c67698-Abstract-Conference.html',
+a3621ee907def47c1b952ade25c67698-Abstract-Conference.html',
'url_related_articles': '/scholar?oi=bibs&hl=en&q=related:9TMbme6CLjcJ:scholar.
google.com/', 'cites_per_year': {2023: 95, 2024: 269}})]
===============================================================================
@@ -157,18 +157,18 @@
"""
===============================================================================
[ToolCallingRecord(func_name='get_full_paper_content_by_link', args=
-{'pdf_url': 'https://hal.science/hal-04206682/document'}, result='Deep
-learning\nYann Lecun, Yoshua Bengio, Geoffrey Hinton\n\nTo cite this
-version:\n\nYann Lecun, Yoshua Bengio, Geoffrey Hinton. Deep learning. Nature,
-2015, 521 (7553), pp.436-444.\n\uffff10.1038/nature14539\uffff.
+{'pdf_url': 'https://hal.science/hal-04206682/document'}, result='Deep
+learning\nYann Lecun, Yoshua Bengio, Geoffrey Hinton\n\nTo cite this
+version:\n\nYann Lecun, Yoshua Bengio, Geoffrey Hinton. Deep learning. Nature,
+2015, 521 (7553), pp.436-444.\n\uffff10.1038/nature14539\uffff.
\uffffhal-04206682\uffff\n\nHAL Id: hal-04206682\n\nhttps://hal.science/
-hal-04206682v1\n\nSubmitted on 14 Sep 2023\n\nHAL is a multi-disciplinary open
-access\narchive for the deposit and dissemination of sci-\nentific research
-documents, whether they are pub-\nlished or not. The documents may come
-from\nteaching and research institutions in France or\nabroad, or from public
-or private research centers.\n\nL'archive ouverte pluridisciplinaire HAL,
-est\ndestinée au dépôt et à la diffusion de documents\nscientifiques de niveau
-recherche, publiés ou non,\némanant des établissements d'enseignement et
+hal-04206682v1\n\nSubmitted on 14 Sep 2023\n\nHAL is a multi-disciplinary open
+access\narchive for the deposit and dissemination of sci-\nentific research
+documents, whether they are pub-\nlished or not. The documents may come
+from\nteaching and research institutions in France or\nabroad, or from public
+or private research centers.\n\nL'archive ouverte pluridisciplinaire HAL,
+est\ndestinée au dépôt et à la diffusion de documents\nscientifiques de niveau
+recherche, publiés ou non,\némanant des établissements d'enseignement et
de\nrecherche français ou étrangers, des laboratoires\npublics ou privés.
\n\n\x0cDeep learning\n\nYann LeCun1,2, Yoshua Bengio3 & Geoffrey Hinton4,
5\n\n1Facebook AI Research, 770 Broadway, New York, New York 10003 USA\n\n2N..
diff --git a/examples/toolkits/human_toolkit.py b/examples/toolkits/human_toolkit.py
index 3d524ee700..bfb5aef62c 100644
--- a/examples/toolkits/human_toolkit.py
+++ b/examples/toolkits/human_toolkit.py
@@ -78,10 +78,10 @@
"""
==========================================================================
Agent Message:
-🔔 Reminder: You have an upcoming meeting scheduled. Please check your
+🔔 Reminder: You have an upcoming meeting scheduled. Please check your
calendar for details!
-I've sent you a notification about your upcoming meeting. Please check your
+I've sent you a notification about your upcoming meeting. Please check your
calendar for details!
==========================================================================
"""
diff --git a/examples/toolkits/hybrid_browser_toolkit_example.py b/examples/toolkits/hybrid_browser_toolkit_example.py
index 28b7a3d62e..9d371c0be2 100644
--- a/examples/toolkits/hybrid_browser_toolkit_example.py
+++ b/examples/toolkits/hybrid_browser_toolkit_example.py
@@ -93,10 +93,10 @@
)
TASK_PROMPT = r"""
-Use Google Search to search for news in Munich today, and click on relevant
+Use Google Search to find today's news in Munich, and click on relevant
websites to get the news and write it in markdown.
-I mean you need to browse multiple websites. After visiting each website,
+You need to browse multiple websites. After visiting each website,
return to the Google search results page and click on other websites.
Use enter to confirm search or input.
diff --git a/examples/toolkits/image_analysis_toolkit.py b/examples/toolkits/image_analysis_toolkit.py
index 0e63ca4aae..23a0e46b8a 100644
--- a/examples/toolkits/image_analysis_toolkit.py
+++ b/examples/toolkits/image_analysis_toolkit.py
@@ -45,16 +45,16 @@
print(response.msgs[0].content)
""""
===========================================================================
-The image depicts a serene landscape featuring a wooden boardwalk that leads
-through a lush, green marsh or meadow. The boardwalk is centrally positioned,
-extending into the distance and inviting viewers to imagine walking along it.
-On either side of the boardwalk, tall grass and various vegetation create a
+The image depicts a serene landscape featuring a wooden boardwalk that leads
+through a lush, green marsh or meadow. The boardwalk is centrally positioned,
+extending into the distance and inviting viewers to imagine walking along it.
+On either side of the boardwalk, tall grass and various vegetation create a
vibrant green expanse.
-In the background, there are clusters of trees and shrubs, adding depth to the
-scene. The sky above is mostly clear with a few scattered clouds, showcasing a
-gradient of blue hues. The overall atmosphere is tranquil and natural,
-suggesting a peaceful outdoor setting, with soft lighting that likely
+In the background, there are clusters of trees and shrubs, adding depth to the
+scene. The sky above is mostly clear with a few scattered clouds, showcasing a
+gradient of blue hues. The overall atmosphere is tranquil and natural,
+suggesting a peaceful outdoor setting, with soft lighting that likely
indicates early morning or late afternoon."
============================================================================
"""
diff --git a/examples/toolkits/jina_reranker_toolkit.py b/examples/toolkits/jina_reranker_toolkit.py
index 18db17ea73..63fdd8ba48 100644
--- a/examples/toolkits/jina_reranker_toolkit.py
+++ b/examples/toolkits/jina_reranker_toolkit.py
@@ -46,17 +46,17 @@
print(str(response.info['tool_calls'])[:1000])
""""
===========================================================================
-[ToolCallingRecord(tool_name='rerank_text_documents', args={'query': 'How to
-use markdown with small language models', 'documents': ['Markdown is a
-lightweight markup language with plain-text formatting syntax.', 'Python is a
-high-level, interpreted programming language known for its readability.', 'SLM
-(Small Language Models) are compact AI models designed for specific tasks.',
-'JavaScript is a scripting language primarily used for creating interactive
-web pages.'], 'max_length': 1024}, result=[('Markdown is a lightweight markup
-language with plain-text formatting syntax.', 0.7915633916854858), ('SLM
+[ToolCallingRecord(tool_name='rerank_text_documents', args={'query': 'How to
+use markdown with small language models', 'documents': ['Markdown is a
+lightweight markup language with plain-text formatting syntax.', 'Python is a
+high-level, interpreted programming language known for its readability.', 'SLM
+(Small Language Models) are compact AI models designed for specific tasks.',
+'JavaScript is a scripting language primarily used for creating interactive
+web pages.'], 'max_length': 1024}, result=[('Markdown is a lightweight markup
+language with plain-text formatting syntax.', 0.7915633916854858), ('SLM
(Small Language Models) are compact AI models designed for specific tasks.', 0.
-7915633916854858), ('Python is a high-level, interpreted programming language
-known for its readability.', 0.43936243653297424), ('JavaScript is a scripting
+7915633916854858), ('Python is a high-level, interpreted programming language
+known for its readability.', 0.43936243653297424), ('JavaScript is a scripting
language primarily used for creating interactive web pages.', 0.
3716837763786316)], tool_call_id='call_JKnuvTO1fUQP7PWhyCSQCK7N')]
===========================================================================
diff --git a/examples/toolkits/klavis_toolkit.py b/examples/toolkits/klavis_toolkit.py
index e8bfbb3841..48904ba40d 100644
--- a/examples/toolkits/klavis_toolkit.py
+++ b/examples/toolkits/klavis_toolkit.py
@@ -69,11 +69,11 @@
- **Description:** Convert markdown text to different file
formats (pdf, docx, doc, html), based on Pandoc.
- **Tools:**
- - Convert markdown text to different file formats (pdf, docx,
+ - Convert markdown text to different file formats (pdf, docx,
doc, html, html5)
2. **Discord**
- - **Description:** Discord is a VoIP and instant messaging
+ - **Description:** Discord is a VoIP and instant messaging
social platform.
- **Tools:**
- Get information about a Discord server (guild)
@@ -84,10 +84,10 @@
- Read recent messages from a Discord channel
3. **YouTube**
- - **Description:** Extract and convert YouTube video information to
+ - **Description:** Extract and convert YouTube video information to
markdown format.
- **Tools:**
- - Retrieve the transcript/subtitles for a YouTube video and convert
+ - Retrieve the transcript/subtitles for a YouTube video and convert
it to markdown
.....
@@ -95,7 +95,7 @@
========================================================================
Server Instance Creation Result:
-A new Klavis AI MCP server instance has been successfully created for the
+A new Klavis AI MCP server instance has been successfully created for the
server named **GitHub**. Here are the details:
- **Server URL:** [https://github-mcp-server.klavis.ai/sse?instance_id=
@@ -116,6 +116,6 @@
- **External User ID:** user123
- **Is Authenticated:** No
-The instance is currently not authenticated. If you need to set an
+The instance is currently not authenticated. If you need to set an
authentication token or perform any other actions, please let me know!
"""
diff --git a/examples/toolkits/mcp/mcp_arxiv_toolkit/client.py b/examples/toolkits/mcp/mcp_arxiv_toolkit/client.py
index 6380ffee7b..b0fdd4084d 100644
--- a/examples/toolkits/mcp/mcp_arxiv_toolkit/client.py
+++ b/examples/toolkits/mcp/mcp_arxiv_toolkit/client.py
@@ -56,21 +56,21 @@ async def run_example_http():
print(res1.content[0].text[:1000])
"""
===============================================================================
-{"title": "Attention Is All You Need But You Don't Need All Of It For
-Inference of Large Language Models", "published_date": "2024-07-22",
-"authors": ["Georgy Tyukin", "Gbetondji J-S Dovonon", "Jean Kaddour",
-"Pasquale Minervini"], "entry_id": "http://arxiv.org/abs/2407.15516v1",
-"summary": "The inference demand for LLMs has skyrocketed in recent months,
-and serving\nmodels with low latencies remains challenging due to the
-quadratic input length\ncomplexity of the attention layers. In this work, we
-investigate the effect of\ndropping MLP and attention layers at inference time
-on the performance of\nLlama-v2 models. We find that dropping dreeper
-attention layers only marginally\ndecreases performance but leads to the best
-speedups alongside dropping entire\nlayers. For example, removing 33\\% of
-attention layers in a 13B Llama2 model\nresults in a 1.8\\% drop in average
-performance over the OpenLLM benchmark. We\nalso observe that skipping layers
+{"title": "Attention Is All You Need But You Don't Need All Of It For
+Inference of Large Language Models", "published_date": "2024-07-22",
+"authors": ["Georgy Tyukin", "Gbetondji J-S Dovonon", "Jean Kaddour",
+"Pasquale Minervini"], "entry_id": "http://arxiv.org/abs/2407.15516v1",
+"summary": "The inference demand for LLMs has skyrocketed in recent months,
+and serving\nmodels with low latencies remains challenging due to the
+quadratic input length\ncomplexity of the attention layers. In this work, we
+investigate the effect of\ndropping MLP and attention layers at inference time
+on the performance of\nLlama-v2 models. We find that dropping dreeper
+attention layers only marginally\ndecreases performance but leads to the best
+speedups alongside dropping entire\nlayers. For example, removing 33\\% of
+attention layers in a 13B Llama2 model\nresults in a 1.8\\% drop in average
+performance over the OpenLLM benchmark. We\nalso observe that skipping layers
except the latter layers reduces perform
-===============================================================================
+===============================================================================
"""
diff --git a/examples/toolkits/mcp/mcp_origene_toolkit/client.py b/examples/toolkits/mcp/mcp_origene_toolkit/client.py
index 38f061b733..c4d830b133 100644
--- a/examples/toolkits/mcp/mcp_origene_toolkit/client.py
+++ b/examples/toolkits/mcp/mcp_origene_toolkit/client.py
@@ -52,7 +52,7 @@ async def main():
'''
===============================================================================
I can assist you with a variety of tasks related to chemical compounds
-and substances using the PubChem database.
+and substances using the PubChem database.
Here are some of the things I can do:
1. **Search for Compounds and Substances**:
@@ -67,10 +67,10 @@ async def main():
3. **Synonyms and Identifiers**:
- Find synonyms for compounds and substances.
- Get CAS Registry Numbers, CIDs, and SIDs.
-
+
4. **3D Structure and Conformers**:
- Access 3D structures and conformer identifiers.
-
+
5. **Bioassay Activities**:
- Summarize bioassay activities for compounds and substances.
diff --git a/examples/toolkits/mcp/mcp_servers_config.json b/examples/toolkits/mcp/mcp_servers_config.json
index d38d2842a1..51a52ff839 100644
--- a/examples/toolkits/mcp/mcp_servers_config.json
+++ b/examples/toolkits/mcp/mcp_servers_config.json
@@ -10,4 +10,4 @@
}
},
"mcpWebServers": {}
-}
\ No newline at end of file
+}
diff --git a/examples/toolkits/mcp/mcp_toolkit.py b/examples/toolkits/mcp/mcp_toolkit.py
index 417b982d73..c0bd63d548 100644
--- a/examples/toolkits/mcp/mcp_toolkit.py
+++ b/examples/toolkits/mcp/mcp_toolkit.py
@@ -61,8 +61,8 @@ async def mcp_client_example():
'''
===============================================================================
-Available MCP tools: ['read_file', 'read_multiple_files', 'write_file',
-'edit_file', 'create_directory', 'list_directory', 'directory_tree',
+Available MCP tools: ['read_file', 'read_multiple_files', 'write_file',
+'edit_file', 'create_directory', 'list_directory', 'directory_tree',
'move_file', 'search_files', 'get_file_info', 'list_allowed_directories']
Directory Contents:
[DIR] .container
@@ -121,14 +121,14 @@ async def mcp_toolkit_example():
3. `CONTRIBUTING.md`
4. `LICENSE`
5. `README.md`
-[ToolCallingRecord(tool_name='list_allowed_directories', args={},
-result='Allowed directories:\n/Users/enrei/Desktop/camel0605/camel',
+[ToolCallingRecord(tool_name='list_allowed_directories', args={},
+result='Allowed directories:\n/Users/enrei/Desktop/camel0605/camel',
tool_call_id='call_xTidk11chOk8j9gjrpNMlKjq'), ToolCallingRecord
(tool_name='list_directory', args={'path': '/Users/enrei/Desktop/camel0605/
camel'}, result='[DIR] .container\n[FILE] .env\n[DIR] .git\n[DIR] .github\n
-[FILE] .gitignore\n[DIR] .mypy_cache\n[FILE] .pre-commit-config.yaml\n[DIR]
-pytest_cache\n[DIR] .ruff_cache\n[FILE] .style.yapf\n[DIR] .venv\n[FILE]
-CONTRIBUTING.md\n[FILE] LICENSE\n[FILE] Makefile\n[FILE] README.md\n[DIR]
+[FILE] .gitignore\n[DIR] .mypy_cache\n[FILE] .pre-commit-config.yaml\n[DIR]
+pytest_cache\n[DIR] .ruff_cache\n[FILE] .style.yapf\n[DIR] .venv\n[FILE]
+CONTRIBUTING.md\n[FILE] LICENSE\n[FILE] Makefile\n[FILE] README.md\n[DIR]
apps\n[DIR] camel\n[DIR] data\n[DIR] docs\n[DIR] examples\n[DIR] licenses\n
[DIR] misc\n[FILE] pyproject.toml\n[DIR] services\n[DIR] test\n[FILE] uv.
lock', tool_call_id='call_8eaZP8LQxxvM3cauWWyZ2fJ4')]
@@ -186,14 +186,14 @@ def mcp_toolkit_example_sync():
3. `CONTRIBUTING.md`
4. `LICENSE`
5. `README.md`
-[ToolCallingRecord(tool_name='list_allowed_directories', args={},
-result='Allowed directories:\n/Users/enrei/Desktop/camel0605/camel',
+[ToolCallingRecord(tool_name='list_allowed_directories', args={},
+result='Allowed directories:\n/Users/enrei/Desktop/camel0605/camel',
tool_call_id='call_xTidk11chOk8j9gjrpNMlKjq'), ToolCallingRecord
(tool_name='list_directory', args={'path': '/Users/enrei/Desktop/camel0605/
camel'}, result='[DIR] .container\n[FILE] .env\n[DIR] .git\n[DIR] .github\n
-[FILE] .gitignore\n[DIR] .mypy_cache\n[FILE] .pre-commit-config.yaml\n[DIR]
-pytest_cache\n[DIR] .ruff_cache\n[FILE] .style.yapf\n[DIR] .venv\n[FILE]
-CONTRIBUTING.md\n[FILE] LICENSE\n[FILE] Makefile\n[FILE] README.md\n[DIR]
+[FILE] .gitignore\n[DIR] .mypy_cache\n[FILE] .pre-commit-config.yaml\n[DIR]
+pytest_cache\n[DIR] .ruff_cache\n[FILE] .style.yapf\n[DIR] .venv\n[FILE]
+CONTRIBUTING.md\n[FILE] LICENSE\n[FILE] Makefile\n[FILE] README.md\n[DIR]
apps\n[DIR] camel\n[DIR] data\n[DIR] docs\n[DIR] examples\n[DIR] licenses\n
[DIR] misc\n[FILE] pyproject.toml\n[DIR] services\n[DIR] test\n[FILE] uv.
lock', tool_call_id='call_8eaZP8LQxxvM3cauWWyZ2fJ4')]
diff --git a/examples/toolkits/memory_toolkit.py b/examples/toolkits/memory_toolkit.py
index 67711fff4b..733097afad 100644
--- a/examples/toolkits/memory_toolkit.py
+++ b/examples/toolkits/memory_toolkit.py
@@ -32,7 +32,7 @@ def run_memory_toolkit_example():
# Create a ChatAgent
agent = ChatAgent(
- system_message="""You are an assistant that can manage
+ system_message="""You are an assistant that can manage
conversation memory using tools.""",
model=model,
)
diff --git a/examples/toolkits/meshy_toolkit.py b/examples/toolkits/meshy_toolkit.py
index 5cc5119053..21bb574ee0 100644
--- a/examples/toolkits/meshy_toolkit.py
+++ b/examples/toolkits/meshy_toolkit.py
@@ -47,16 +47,16 @@
Status after 106s: IN_PROGRESS
Status after 117s: SUCCEEDED
-Final Response: {'id': '01939144-7dea-73c7-af06-efa79c83243f', 'mode':
-'refine', 'name': '', 'seed': 1733308970, 'art_style': 'realistic',
-'texture_richness': 'high', 'prompt': 'A figuring of Tin tin the cartoon
-character', 'negative_prompt': 'low quality, low resolution, low poly, ugly',
-'status': 'SUCCEEDED', 'created_at': 1733309005313, 'progress': 100,
-'started_at': 1733309006267, 'finished_at': 1733309113474, 'task_error': None,
-'model_urls': {'glb': 'https://assets.meshy.ai/5e05026a-0e91-4073-83fe-0263b1b4d348/tasks/01939144-7dea-73c7-af06-efa79c83243f/output/model.glb?Expires=4886870400&Signature=TEbWpN8sFZOf1FKWBVxKNdT2Ltm1Ma6vHuUUpBh6rZaAzfTBQPKvV2i7RmD~wwaebbQSBvVVagF4j587tNKNwHPqkGtpBjBu2q43n4lWM6W--RxSqbOCvVZ54PiAzzlVjM9PzPz-MasrWQtYipm5qJ5tsWd7XoxB6Wv2tZMZEWsftdLxmXdp9SuoBcu5NM~MRnyvhEYPmwU9uCAKfh4FZ14mhfx6TeDpCprYh1ngnlkLzDXk5Mdw0HJ1zuYpnkCOUtth84p6Oq5aU0HtWtUVd2tLi53rqKn9QC0qxcH7QlPxxI1aoUtzqaMXXiqCGylzZuPTZILhdFWeAoiEtCOLZw__&Key-Pair-Id=KL5I0C8H7HX83', 'fbx': 'https://assets.meshy.ai/5e05026a-0e91-4073-83fe-0263b1b4d348/tasks/01939144-7dea-73c7-af06-efa79c83243f/output/model.fbx?Expires=4886870400&Signature=jGOPhF8FL1wa9mVbodNoq1jMVzi2gklWRrrl2qWAZvWAhadc4wgjmgTweBKsNiO~KMTGzCiey7iqSIGm6dDEYAMv72HExpIO7I8HwAVPp4KwhzORzwr6OcEoY9-7fFR9yEg~WqnWewmdkrcnUVHHx2Z9imkDkIhISn1IOERkum48mTemlyejb87CXGV14uX3hxIVKle4at6S8tMUfpXhCdZ3aIxmgt9Dpsobol92XtQbpC-JhJSOgMNBWtAH3OUXAgECsYrRRZND9gcZZkUZlXHHZt439JsU8MPoXZd4RQ0OGn~vb6W51rvQ904ErsYZf47dLYNswaxb6Se3oKm~zw__&Key-Pair-Id=KL5I0C8H7HX83', 'usdz': 'https://assets.meshy.ai/5e05026a-0e91-4073-83fe-0263b1b4d348/tasks/01939144-7dea-73c7-af06-efa79c83243f/output/model.usdz?Expires=4886870400&Signature=ICOOIH6EDVdy9LYCk-azYqBWtl6t9v2xZpRk8C8kQKa38jUXdukjoLLN469VP5a7rdIKinLA~I5-hMr-kd-MEmwJzE3JKw2ojNimYPa5Uvnr3R~4S~2fQgCPWfn2xVkt6Cvfx~Qj8~ZNVxMj0jvnKkIySRHYaqvCxMfASHCB7Kz9LN3lBWuT709pEnQ6mtwLJWybLlIJkMFOVoapw~epIgWBtJjhMNwPCzXswUddKSdirOHIm8JRoN3~Ha99oxo4nSN5tyf3u2fWLxGOTeAyp7Hcq97gMkdqjuNc14k2n7fPULgbSCkHepLIG8GQrNLMfA6hkphkIj0LdjC6AQ7pvg__&Key-Pair-Id=KL5I0C8H7HX83', 'obj': 
'https://assets.meshy.ai/5e05026a-0e91-4073-83fe-0263b1b4d348/tasks/01939144-7dea-73c7-af06-efa79c83243f/output/model.obj?Expires=4886870400&Signature=a53mEQASL7jRU8Xz5WhN-~d3~74BlBlqDMufryX-j1~jXTgbMEEhY2dC5R7dHHHJbJ4ns9GQ8cbjxcCImVvjkiLvPYZ-lraLaqMnbG~hatsZNv6wDZxTson8jsiqTSHaLnamp83zycLotM~zrUW0DIHGoFWvf9DPTKqy4Z0ZAOxOsA9qfAmJI6k2CVHLu0hMRLAjm3f8KA4j90kJBBVuYvABZi27hP-aURhD09zoAMp~AsrXSKxFjd5wcYqKko78qch2K2H5NaAUGhsKbuNmBMFaxc0C5dKgSlKufWmib86vUOe1dYLQyqGTS85u5dVQSwFrDY5gyugGJ4TH-aVQVw__&Key-Pair-Id=KL5I0C8H7HX83', 'mtl': 'https://assets.meshy.ai/5e05026a-0e91-4073-83fe-0263b1b4d348/tasks/01939144-7dea-73c7-af06-efa79c83243f/output/model.mtl?Expires=4886870400&Signature=FnY3cNMqEymvBw~33riU~HkVIifWKEUh0ndV90VaWMnKczU~Wqng7AYTqwywr6PNQDuFL~iQFw-y6qvklsV9I0aTg8OoYQ3dfCaRqydwUbN80aonk~fwpAJUwBxqbhhN4n9T~7WTX-pyo0w5vQ09wte4G-4yAIUEM7qlOwZohdfK2a~EIhnq9WiV92TuGtm0c4x5n6png9ZjX5pHnp~a77UCBJlIQ1teN5Rb3I9HFh4sbUGdcXUas7B9EIq4YiabjO9vf5FGwicb2XQ-YxJFJJdEJwbBp6l6iZCbSk-WijmIWmyD~8A~jhTNwlG9UHR5qTsnprntgoRyLdTRSXvDzg__&Key-Pair-Id=KL5I0C8H7HX83'},
-'thumbnail_url': 'https://assets.meshy.ai/5e05026a-0e91-4073-83fe-0263b1b4d348/tasks/01939144-7dea-73c7-af06-efa79c83243f/output/preview.png?Expires=4886870400&Signature=B16evi199mig4RTFq0FVPrHGkpjpPudRpLfpcY2FxJIkIFYg42-v8BfsL3eWAM-XDlDqahPSXtqqm6emVkSu550iPqo2yy-URoDifiIl5petEy~42nHtc1-dZB1HcEvtcyycHOjmk1y8zQfZBgQ8cjGq0Ds19xSdOXIo7-~QDPWhUGUTreJvBNg17GitgvcfYbGj2g6gibYJWjwatM7A6yHhq3d53N8eDcmO5L6dBH3VwUFTxDWBQXwUT7aXkS7dsQ7Wz5CkIbbH~T-4Pn5KpdJy1Kf1Lrh1YpOUN4T7JI8Ot5urYKYRj4cZ96xpDD9gicPGvgrRaicFyb1sSwW2ow__&Key-Pair-Id=KL5I0C8H7HX83',
-'video_url': 'https://assets.meshy.ai/5e05026a-0e91-4073-83fe-0263b1b4d348/tasks/01939144-7dea-73c7-af06-efa79c83243f/output/output.mp4?Expires=4886870400&Signature=r8t11N9~XGzNfuW3KowxarSpr7hC8eQb0kzkKADOz3hdTA9vAqBdv5AVdMGDlMmH9IP4R60UCnVNT6scA1EeN3FZLXaHaWbsxHDuc4XdUk7DE7AbwUKSbl~YdUSu5-RkNu6vaMHTiB55XubUJhv9ReB25a6Ifee0rf1ulGs-amFSMlL~eNPq6HTUI6NGAqi1p~VeFzE53JV5sWvU2JYnbGe8kzruC705z1LiCU-9isWzJGuOIy~RpiVfYzSmgh4xeILaYKpxR2ZM2uVtbi6snl~aYsqiKMIIMxMg-aZDWn-f5voiWaCL1OUV5fxbI82ZRJNd5DSlVjI~umqZZIl-iw__&Key-Pair-Id=KL5I0C8H7HX83',
+Final Response: {'id': '01939144-7dea-73c7-af06-efa79c83243f', 'mode':
+'refine', 'name': '', 'seed': 1733308970, 'art_style': 'realistic',
+'texture_richness': 'high', 'prompt': 'A figuring of Tin tin the cartoon
+character', 'negative_prompt': 'low quality, low resolution, low poly, ugly',
+'status': 'SUCCEEDED', 'created_at': 1733309005313, 'progress': 100,
+'started_at': 1733309006267, 'finished_at': 1733309113474, 'task_error': None,
+'model_urls': {'glb': 'https://assets.meshy.ai/5e05026a-0e91-4073-83fe-0263b1b4d348/tasks/01939144-7dea-73c7-af06-efa79c83243f/output/model.glb?Expires=4886870400&Signature=TEbWpN8sFZOf1FKWBVxKNdT2Ltm1Ma6vHuUUpBh6rZaAzfTBQPKvV2i7RmD~wwaebbQSBvVVagF4j587tNKNwHPqkGtpBjBu2q43n4lWM6W--RxSqbOCvVZ54PiAzzlVjM9PzPz-MasrWQtYipm5qJ5tsWd7XoxB6Wv2tZMZEWsftdLxmXdp9SuoBcu5NM~MRnyvhEYPmwU9uCAKfh4FZ14mhfx6TeDpCprYh1ngnlkLzDXk5Mdw0HJ1zuYpnkCOUtth84p6Oq5aU0HtWtUVd2tLi53rqKn9QC0qxcH7QlPxxI1aoUtzqaMXXiqCGylzZuPTZILhdFWeAoiEtCOLZw__&Key-Pair-Id=KL5I0C8H7HX83', 'fbx': 'https://assets.meshy.ai/5e05026a-0e91-4073-83fe-0263b1b4d348/tasks/01939144-7dea-73c7-af06-efa79c83243f/output/model.fbx?Expires=4886870400&Signature=jGOPhF8FL1wa9mVbodNoq1jMVzi2gklWRrrl2qWAZvWAhadc4wgjmgTweBKsNiO~KMTGzCiey7iqSIGm6dDEYAMv72HExpIO7I8HwAVPp4KwhzORzwr6OcEoY9-7fFR9yEg~WqnWewmdkrcnUVHHx2Z9imkDkIhISn1IOERkum48mTemlyejb87CXGV14uX3hxIVKle4at6S8tMUfpXhCdZ3aIxmgt9Dpsobol92XtQbpC-JhJSOgMNBWtAH3OUXAgECsYrRRZND9gcZZkUZlXHHZt439JsU8MPoXZd4RQ0OGn~vb6W51rvQ904ErsYZf47dLYNswaxb6Se3oKm~zw__&Key-Pair-Id=KL5I0C8H7HX83', 'usdz': 'https://assets.meshy.ai/5e05026a-0e91-4073-83fe-0263b1b4d348/tasks/01939144-7dea-73c7-af06-efa79c83243f/output/model.usdz?Expires=4886870400&Signature=ICOOIH6EDVdy9LYCk-azYqBWtl6t9v2xZpRk8C8kQKa38jUXdukjoLLN469VP5a7rdIKinLA~I5-hMr-kd-MEmwJzE3JKw2ojNimYPa5Uvnr3R~4S~2fQgCPWfn2xVkt6Cvfx~Qj8~ZNVxMj0jvnKkIySRHYaqvCxMfASHCB7Kz9LN3lBWuT709pEnQ6mtwLJWybLlIJkMFOVoapw~epIgWBtJjhMNwPCzXswUddKSdirOHIm8JRoN3~Ha99oxo4nSN5tyf3u2fWLxGOTeAyp7Hcq97gMkdqjuNc14k2n7fPULgbSCkHepLIG8GQrNLMfA6hkphkIj0LdjC6AQ7pvg__&Key-Pair-Id=KL5I0C8H7HX83', 'obj': 
'https://assets.meshy.ai/5e05026a-0e91-4073-83fe-0263b1b4d348/tasks/01939144-7dea-73c7-af06-efa79c83243f/output/model.obj?Expires=4886870400&Signature=a53mEQASL7jRU8Xz5WhN-~d3~74BlBlqDMufryX-j1~jXTgbMEEhY2dC5R7dHHHJbJ4ns9GQ8cbjxcCImVvjkiLvPYZ-lraLaqMnbG~hatsZNv6wDZxTson8jsiqTSHaLnamp83zycLotM~zrUW0DIHGoFWvf9DPTKqy4Z0ZAOxOsA9qfAmJI6k2CVHLu0hMRLAjm3f8KA4j90kJBBVuYvABZi27hP-aURhD09zoAMp~AsrXSKxFjd5wcYqKko78qch2K2H5NaAUGhsKbuNmBMFaxc0C5dKgSlKufWmib86vUOe1dYLQyqGTS85u5dVQSwFrDY5gyugGJ4TH-aVQVw__&Key-Pair-Id=KL5I0C8H7HX83', 'mtl': 'https://assets.meshy.ai/5e05026a-0e91-4073-83fe-0263b1b4d348/tasks/01939144-7dea-73c7-af06-efa79c83243f/output/model.mtl?Expires=4886870400&Signature=FnY3cNMqEymvBw~33riU~HkVIifWKEUh0ndV90VaWMnKczU~Wqng7AYTqwywr6PNQDuFL~iQFw-y6qvklsV9I0aTg8OoYQ3dfCaRqydwUbN80aonk~fwpAJUwBxqbhhN4n9T~7WTX-pyo0w5vQ09wte4G-4yAIUEM7qlOwZohdfK2a~EIhnq9WiV92TuGtm0c4x5n6png9ZjX5pHnp~a77UCBJlIQ1teN5Rb3I9HFh4sbUGdcXUas7B9EIq4YiabjO9vf5FGwicb2XQ-YxJFJJdEJwbBp6l6iZCbSk-WijmIWmyD~8A~jhTNwlG9UHR5qTsnprntgoRyLdTRSXvDzg__&Key-Pair-Id=KL5I0C8H7HX83'},
+'thumbnail_url': 'https://assets.meshy.ai/5e05026a-0e91-4073-83fe-0263b1b4d348/tasks/01939144-7dea-73c7-af06-efa79c83243f/output/preview.png?Expires=4886870400&Signature=B16evi199mig4RTFq0FVPrHGkpjpPudRpLfpcY2FxJIkIFYg42-v8BfsL3eWAM-XDlDqahPSXtqqm6emVkSu550iPqo2yy-URoDifiIl5petEy~42nHtc1-dZB1HcEvtcyycHOjmk1y8zQfZBgQ8cjGq0Ds19xSdOXIo7-~QDPWhUGUTreJvBNg17GitgvcfYbGj2g6gibYJWjwatM7A6yHhq3d53N8eDcmO5L6dBH3VwUFTxDWBQXwUT7aXkS7dsQ7Wz5CkIbbH~T-4Pn5KpdJy1Kf1Lrh1YpOUN4T7JI8Ot5urYKYRj4cZ96xpDD9gicPGvgrRaicFyb1sSwW2ow__&Key-Pair-Id=KL5I0C8H7HX83',
+'video_url': 'https://assets.meshy.ai/5e05026a-0e91-4073-83fe-0263b1b4d348/tasks/01939144-7dea-73c7-af06-efa79c83243f/output/output.mp4?Expires=4886870400&Signature=r8t11N9~XGzNfuW3KowxarSpr7hC8eQb0kzkKADOz3hdTA9vAqBdv5AVdMGDlMmH9IP4R60UCnVNT6scA1EeN3FZLXaHaWbsxHDuc4XdUk7DE7AbwUKSbl~YdUSu5-RkNu6vaMHTiB55XubUJhv9ReB25a6Ifee0rf1ulGs-amFSMlL~eNPq6HTUI6NGAqi1p~VeFzE53JV5sWvU2JYnbGe8kzruC705z1LiCU-9isWzJGuOIy~RpiVfYzSmgh4xeILaYKpxR2ZM2uVtbi6snl~aYsqiKMIIMxMg-aZDWn-f5voiWaCL1OUV5fxbI82ZRJNd5DSlVjI~umqZZIl-iw__&Key-Pair-Id=KL5I0C8H7HX83',
'texture_urls': [{'base_color': 'https://assets.meshy.ai/5e05026a-0e91-4073-83fe-0263b1b4d348/tasks/01939144-7dea-73c7-af06-efa79c83243f/output/texture_0.png?Expires=4886870400&Signature=Q8SGRrnE00-mGHCAcIOUUAig~YtTJqVx1n2IqFFbXBNUPvf~hsTYzcKgC2wQjF25tj0D6yQ8BiIktN9WjsKu0SnbeED~ofHIA0quheMjwHL~hfdj63LGWkMumVEjE2ZVwDv-DdlROF3ayw5hQxzlRbcHwXLq0n2xMHmj-WetyiYBKCcJbXbZMOAtlo8e40d21CGMnjImduCvdwhpqwNKUx4MwHeM2W0GW4OC94AoSF8AccHJeQPD2gdu7JHoTuZFjcqS-9YCjmHT7Y5Xg7rmeNYz40O21sYci0b54NvBDzX-6HvydjqtY-ofudppaxlC77Zd~FaVcCz5rH2J43cdLg__&Key-Pair-Id=KL5I0C8H7HX83'}]}
-(camel-ai-py3.12)
+(camel-ai-py3.12)
==========================================================================
"""
diff --git a/examples/toolkits/networkx_toolkit.py b/examples/toolkits/networkx_toolkit.py
index dd7c8b1233..488b053a10 100644
--- a/examples/toolkits/networkx_toolkit.py
+++ b/examples/toolkits/networkx_toolkit.py
@@ -48,20 +48,20 @@
'''
===============================================================================
-[ToolCallingRecord(tool_name='add_edge', args={'source': 'A', 'target': 'B'},
+[ToolCallingRecord(tool_name='add_edge', args={'source': 'A', 'target': 'B'},
result=None, tool_call_id='call_iewKMXQd2GKwKWy7XJ5e5d8e'), ToolCallingRecord
-(tool_name='add_edge', args={'source': 'A', 'target': 'B'}, result=None,
+(tool_name='add_edge', args={'source': 'A', 'target': 'B'}, result=None,
tool_call_id='call_Xn8wq22oKeKekuPEqcSj5HuJ'), ToolCallingRecord
-(tool_name='add_edge', args={'source': 'B', 'target': 'C'}, result=None,
+(tool_name='add_edge', args={'source': 'B', 'target': 'C'}, result=None,
tool_call_id='call_bPeCvUBk1iQ6vv5060Zd7nbi'), ToolCallingRecord
-(tool_name='add_edge', args={'source': 'C', 'target': 'A'}, result=None,
+(tool_name='add_edge', args={'source': 'C', 'target': 'A'}, result=None,
tool_call_id='call_inCnY60iSBVghsrrHEDh7hNw'), ToolCallingRecord
-(tool_name='get_shortest_path', args={'source': 'A', 'target': 'C', 'weight':
-'weight', 'method': 'dijkstra'}, result=['A', 'B', 'C'],
+(tool_name='get_shortest_path', args={'source': 'A', 'target': 'C', 'weight':
+'weight', 'method': 'dijkstra'}, result=['A', 'B', 'C'],
tool_call_id='call_Gwy3Ca8RDQCZFuiy2h0Z6SSF'), ToolCallingRecord
-(tool_name='get_edges', args={}, result=[('A', 'B'), ('B', 'C'), ('C', 'A')],
+(tool_name='get_edges', args={}, result=[('A', 'B'), ('B', 'C'), ('C', 'A')],
tool_call_id='call_LU2xhb2W4h5a6LOx4U8gLuxa'), ToolCallingRecord
-(tool_name='get_nodes', args={}, result=['A', 'B', 'C'],
+(tool_name='get_nodes', args={}, result=['A', 'B', 'C'],
tool_call_id='call_WLuB1nBrhFeGj4FKrbwfnCrG')]
===============================================================================
'''
diff --git a/examples/toolkits/notion_mcp_tookit.py b/examples/toolkits/notion_mcp_tookit.py
index 281f2113c0..433dab6e57 100644
--- a/examples/toolkits/notion_mcp_tookit.py
+++ b/examples/toolkits/notion_mcp_tookit.py
@@ -36,8 +36,8 @@ async def main():
)
response = await chat_agent.astep(
- """create a new page in my notion named 'Camel Introduction'
-and add some content describing Camel as a lightweight multi-agent framework
+ """create a new page in my notion named 'Camel Introduction'
+and add some content describing Camel as a lightweight multi-agent framework
that supports role-driven collaboration and modular workflows."""
)
@@ -58,14 +58,14 @@ async def main():
[681981] [Local→Remote] notifications/initialized
[681981] [Local→Remote] tools/list
[681981] [Remote→Local] 1
-/home/lyz/Camel/camel/camel/memories/blocks/chat_history_block.py:73:
+/home/lyz/Camel/camel/camel/memories/blocks/chat_history_block.py:73:
UserWarning: The `ChatHistoryMemory` is empty.
warnings.warn("The `ChatHistoryMemory` is empty.")
[681981] [Local→Remote] tools/call
[681981] [Remote→Local] 2
-I have created the page "Camel Introduction" for you. You can view it here:
+I have created the page "Camel Introduction" for you. You can view it here:
https://www.notion.so/2626be7b2793819aaf2cfe686a554bdd
-[681981]
+[681981]
Shutting down...
============================================================================
"""
diff --git a/examples/toolkits/notion_toolkit.py b/examples/toolkits/notion_toolkit.py
index 0a8f3d2258..490e0b6ca7 100644
--- a/examples/toolkits/notion_toolkit.py
+++ b/examples/toolkits/notion_toolkit.py
@@ -44,8 +44,8 @@
print(str(response.info['tool_calls'])[:1000])
"""
==========================================================================
-[ToolCallingRecord(func_name='list_all_pages', args={}, result=[{'id':
-'12684f56-4caa-8080-be91-d7fb1a5834e3', 'title': 'test page'},
+[ToolCallingRecord(func_name='list_all_pages', args={}, result=[{'id':
+'12684f56-4caa-8080-be91-d7fb1a5834e3', 'title': 'test page'},
{'id': '47a4fb54-e34b-4b45-9928-aa2802982eb8', 'title': 'Aigentbot'}])]
"""
@@ -58,7 +58,7 @@
"""
==========================================================================
[ToolCallingRecord(func_name='get_notion_block_text_content', args=
-{'block_id': '12684f56-4caa-8080-be91-d7fb1a5834e3'}, result='hellonihao
+{'block_id': '12684f56-4caa-8080-be91-d7fb1a5834e3'}, result='hellonihao
buhao this is a test par [Needs case added] another par [Needs case added]
A cute cat: https://www.google.com/imgres?q=cat&imgurl=https%3A%2F%2Fi.
natgeofe.com%2Fn%2F548467d8-c5f1-4551-9f58-6817a8d2c45e%2FNationalGeographic
diff --git a/examples/toolkits/openbb_toolkit.py b/examples/toolkits/openbb_toolkit.py
index f1153a87c8..6f815e0631 100644
--- a/examples/toolkits/openbb_toolkit.py
+++ b/examples/toolkits/openbb_toolkit.py
@@ -151,43 +151,43 @@
"""
===============================================================================
Apple Inc. Company Information:
-[FMPEquityProfileData(symbol=AAPL, name=Apple Inc., cik=0000320193,
-cusip=037833100, isin=US0378331005, lei=None, legal_name=None,
-stock_exchange=NASDAQ Global Select, sic=None, short_description=None,
-long_description=Apple Inc. designs, manufactures, and markets smartphones,
-personal computers, tablets, wearables, and accessories worldwide. The company
-offers iPhone, a line of smartphones; Mac, a line of personal computers; iPad,
-a line of multi-purpose tablets; and wearables, home, and accessories
-comprising AirPods, Apple TV, Apple Watch, Beats products, and HomePod. It
-also provides AppleCare support and cloud services; and operates various
-platforms, including the App Store that allow customers to discover and
-download applications and digital content, such as books, music, video, games,
-and podcasts. In addition, the company offers various services, such as Apple
-Arcade, a game subscription service; Apple Fitness+, a personalized fitness
-service; Apple Music, which offers users a curated listening experience with
-on-demand radio stations; Apple News+, a subscription news and magazine
-service; Apple TV+, which offers exclusive original content; Apple Card, a
-co-branded credit card; and Apple Pay, a cashless payment service, as well as
-licenses its intellectual property. The company serves consumers, and small
-and mid-sized businesses; and the education, enterprise, and government
-markets. It distributes third-party applications for its products through the
-App Store. The company also sells its products through its retail and online
-stores, and direct sales force; and third-party cellular network carriers,
-wholesalers, retailers, and resellers. Apple Inc. was founded in 1976 and is
-headquartered in Cupertino, California., ceo=Mr. Timothy D. Cook,
-company_url=https://www.apple.com, business_address=None,
-mailing_address=None, business_phone_no=408 996 1010, hq_address1=One Apple
-Park Way, hq_address2=None, hq_address_city=Cupertino,
-hq_address_postal_code=95014, hq_state=CA, hq_country=US, inc_state=None,
-inc_country=None, employees=164000, entity_legal_form=None,
-entity_status=None, latest_filing_date=None, irs_number=None,
-sector=Technology, industry_category=Consumer Electronics,
-industry_group=None, template=None, standardized_active=None,
-first_fundamental_date=None, last_fundamental_date=None,
-first_stock_price_date=1980-12-12, last_stock_price_date=None, is_etf=False,
+[FMPEquityProfileData(symbol=AAPL, name=Apple Inc., cik=0000320193,
+cusip=037833100, isin=US0378331005, lei=None, legal_name=None,
+stock_exchange=NASDAQ Global Select, sic=None, short_description=None,
+long_description=Apple Inc. designs, manufactures, and markets smartphones,
+personal computers, tablets, wearables, and accessories worldwide. The company
+offers iPhone, a line of smartphones; Mac, a line of personal computers; iPad,
+a line of multi-purpose tablets; and wearables, home, and accessories
+comprising AirPods, Apple TV, Apple Watch, Beats products, and HomePod. It
+also provides AppleCare support and cloud services; and operates various
+platforms, including the App Store that allow customers to discover and
+download applications and digital content, such as books, music, video, games,
+and podcasts. In addition, the company offers various services, such as Apple
+Arcade, a game subscription service; Apple Fitness+, a personalized fitness
+service; Apple Music, which offers users a curated listening experience with
+on-demand radio stations; Apple News+, a subscription news and magazine
+service; Apple TV+, which offers exclusive original content; Apple Card, a
+co-branded credit card; and Apple Pay, a cashless payment service, as well as
+licenses its intellectual property. The company serves consumers, and small
+and mid-sized businesses; and the education, enterprise, and government
+markets. It distributes third-party applications for its products through the
+App Store. The company also sells its products through its retail and online
+stores, and direct sales force; and third-party cellular network carriers,
+wholesalers, retailers, and resellers. Apple Inc. was founded in 1976 and is
+headquartered in Cupertino, California., ceo=Mr. Timothy D. Cook,
+company_url=https://www.apple.com, business_address=None,
+mailing_address=None, business_phone_no=408 996 1010, hq_address1=One Apple
+Park Way, hq_address2=None, hq_address_city=Cupertino,
+hq_address_postal_code=95014, hq_state=CA, hq_country=US, inc_state=None,
+inc_country=None, employees=164000, entity_legal_form=None,
+entity_status=None, latest_filing_date=None, irs_number=None,
+sector=Technology, industry_category=Consumer Electronics,
+industry_group=None, template=None, standardized_active=None,
+first_fundamental_date=None, last_fundamental_date=None,
+first_stock_price_date=1980-12-12, last_stock_price_date=None, is_etf=False,
is_actively_trading=True, is_adr=False, is_fund=False, image=https://images.
-financialmodelingprep.com/symbol/AAPL.png, currency=USD,
-market_cap=3785298636000, last_price=250.42, year_high=260.1, year_low=164.08,
+financialmodelingprep.com/symbol/AAPL.png, currency=USD,
+market_cap=3785298636000, last_price=250.42, year_high=260.1, year_low=164.08,
volume_avg=43821504, annualized_dividend_amount=0.99, beta=1.24)]
===============================================================================
"""
@@ -222,16 +222,16 @@
"""
===============================================================================
Microsoft Financial Statements Overview:
-Balance Sheet: [FMPBalanceSheetData(period_ending=2024-06-30,
-fiscal_period=FY, fiscal_year=2024, filing_date=2024-07-30,
-accepted_date=2024-07-30 16:06:22, reported_currency=USD,
-cash_and_cash_equivalents=18315000000.0, short_term_investments=57216000000.0,
-cash_and_short_term_investments=75531000000.0, net_receivables=56924000000.0,
-inventory=1246000000.0, other_current_assets=26033000000.0,
+Balance Sheet: [FMPBalanceSheetData(period_ending=2024-06-30,
+fiscal_period=FY, fiscal_year=2024, filing_date=2024-07-30,
+accepted_date=2024-07-30 16:06:22, reported_currency=USD,
+cash_and_cash_equivalents=18315000000.0, short_term_investments=57216000000.0,
+cash_and_short_term_investments=75531000000.0, net_receivables=56924000000.0,
+inventory=1246000000.0, other_current_assets=26033000000.0,
total_current_assets=159734000000.0, plant_property_equipment_net=154552000000.
-0, goodwill=119220000000.0, intangible_assets=27597000000.0,
-goodwill_and_intangible_assets=146817000000.0,
-long_term_investments=14600000000.0, tax_assets=None,
+0, goodwill=119220000000.0, intangible_assets=27597000000.0,
+goodwill_and_intangible_assets=146817000000.0,
+long_term_investments=14600000000.0, tax_assets=None,
other_non_current_assets=36460000000.0, non_current_assets=352429000000.0, ..
===============================================================================
"""
@@ -242,7 +242,7 @@
# Initialize agent with toolkit tools
tech_agent = ChatAgent(
system_message="""You are a financial analysis expert. Analyze the provided
- financial data and provide insights about the company's financial
+ financial data and provide insights about the company's financial
health.""",
tools=toolkit.get_tools(),
)
@@ -303,42 +303,42 @@
### Financial Health Analysis of Apple Inc. (AAPL)
#### 1. Balance Sheet Strength
-- **Total Debt**: The total debt has shown a decreasing trend from
-approximately $132.48 billion in 2022 to $106.63 billion in 2024. This
+- **Total Debt**: The total debt has shown a decreasing trend from
+approximately $132.48 billion in 2022 to $106.63 billion in 2024. This
indicates a reduction in leverage and improved financial stability.
-- **Net Debt**: Similarly, net debt has decreased from about $108.83 billion
-in 2022 to $76.69 billion in 2024, suggesting that the company is managing its
+- **Net Debt**: Similarly, net debt has decreased from about $108.83 billion
+in 2022 to $76.69 billion in 2024, suggesting that the company is managing its
debt effectively and has sufficient cash reserves to cover its liabilities.
#### 2. Profitability Trends
-- **Revenue Growth**: AAPL has consistently generated significant revenue,
-with a notable increase in profitability over the years. The income statement
+- **Revenue Growth**: AAPL has consistently generated significant revenue,
+with a notable increase in profitability over the years. The income statement
shows a healthy profit margin, indicating effective cost management.
-- **Operating Income**: The operating income has remained strong, reflecting
+- **Operating Income**: The operating income has remained strong, reflecting
the company's ability to generate profit from its core operations.
-- **Interest Expenses**: Interest expenses have been relatively stable, which
+- **Interest Expenses**: Interest expenses have been relatively stable, which
is a positive sign as it indicates that the company is not over-leveraged.
#### 3. Key Financial Metrics
-- **Market Capitalization**: As of 2024, AAPL's market cap is approximately
+- **Market Capitalization**: As of 2024, AAPL's market cap is approximately
$3.50 trillion, making it one of the most valuable companies in the world.
-- **P/E Ratio**: The P/E ratio has increased from 24.44 in 2022 to 37.29 in
-2024, indicating that the stock may be overvalued relative to its earnings,
+- **P/E Ratio**: The P/E ratio has increased from 24.44 in 2022 to 37.29 in
+2024, indicating that the stock may be overvalued relative to its earnings,
which could be a concern for investors.
-- **Dividend Yield**: The dividend yield has decreased slightly, reflecting a
-focus on reinvesting profits for growth rather than returning cash to
+- **Dividend Yield**: The dividend yield has decreased slightly, reflecting a
+focus on reinvesting profits for growth rather than returning cash to
shareholders.
-- **Graham Number**: The Graham number indicates that the stock may be
-overvalued, as the calculated value is negative, suggesting that the stock
+- **Graham Number**: The Graham number indicates that the stock may be
+overvalued, as the calculated value is negative, suggesting that the stock
price exceeds its intrinsic value based on earnings and book value.
#### 4. Business Profile
-- **Industry Position**: AAPL is a leader in the technology sector,
-particularly in consumer electronics, software, and services. Its strong brand
-loyalty and innovative product offerings contribute to its competitive
+- **Industry Position**: AAPL is a leader in the technology sector,
+particularly in consumer electronics, software, and services. Its strong brand
+loyalty and innovative product offerings contribute to its competitive
advantage.
-- **Growth Potential**: The company continues to invest in research and
-development, positioning itself for future growth in emerging technologies and
+- **Growth Potential**: The company continues to invest in research and
+development, positioning itself for future growth in emerging technologies and
services.
### Summary of Strengths and Potential Concerns
@@ -350,10 +350,10 @@
**Potential Concerns:**
- Increasing P/E ratio may indicate overvaluation.
- Decreasing dividend yield could concern income-focused investors.
-- Negative Graham number suggests potential overvaluation based on intrinsic
+- Negative Graham number suggests potential overvaluation based on intrinsic
value metrics.
-Overall, AAPL demonstrates robust financial health, but investors should be
+Overall, AAPL demonstrates robust financial health, but investors should be
cautious of valuation metrics that may indicate a correction in stock price.
===============================================================================
"""
diff --git a/examples/toolkits/pptx_toolkit.py b/examples/toolkits/pptx_toolkit.py
index a9e5fd9a89..9d58ddcf00 100644
--- a/examples/toolkits/pptx_toolkit.py
+++ b/examples/toolkits/pptx_toolkit.py
@@ -39,9 +39,9 @@ def run_pptx_agent():
)
# Initialize the agent with the toolkit
- system_message = """You are a helpful assistant that can create PowerPoint
+ system_message = """You are a helpful assistant that can create PowerPoint
presentations.
- When creating presentations, you must first select the appropriate
+ When creating presentations, you must first select the appropriate
template based on the topic:
Available Templates:
@@ -64,15 +64,15 @@ def run_pptx_agent():
4. Use proper JSON formatting with double quotes for all strings
5. Add img_keywords field to include relevant images from Pexels
- IMPORTANT:
- 1. First, analyze the presentation topic and select the most appropriate
+ IMPORTANT:
+ 1. First, analyze the presentation topic and select the most appropriate
template
2. Then create the JSON content following the format above
- 3. Finally, use the create_presentation tool to generate the PPTX file
+ 3. Finally, use the create_presentation tool to generate the PPTX file
with the selected template
Example tool usage:
- create_presentation(content='[{"title": "Example", "subtitle": "Demo"}]',
+ create_presentation(content='[{"title": "Example", "subtitle": "Demo"}]',
filename="example.pptx", template="/examples/toolkits/slides_templates/
modern.pptx")
"""
@@ -87,35 +87,35 @@ def run_pptx_agent():
# Example 1: Presentation with various slide types
print("Example 1: Creating a presentation with various slide types")
- presentation_query = """Create a PowerPoint presentation about
- \"CAMEL-AI\" based on the content below:
- CAMEL: The first and the best multi-agent framework. We are working on
- finding the scaling laws of Agents. We believe that studying these agents
- on a large scale offers valuable insights into their behaviors,
- capabilities, and potential risks. To facilitate research in this field,
- we implement and support various types of agents, tasks, prompts, models,
+ presentation_query = """Create a PowerPoint presentation about
+ \"CAMEL-AI\" based on the content below:
+ CAMEL: The first and the best multi-agent framework. We are working on
+ finding the scaling laws of Agents. We believe that studying these agents
+ on a large scale offers valuable insights into their behaviors,
+ capabilities, and potential risks. To facilitate research in this field,
+ we implement and support various types of agents, tasks, prompts, models,
and simulated environments.
- The CAMEL project is building the foundational infrastructure for AI
- agents: collaborative, tool-using agents that operates in complex,
+ The CAMEL project is building the foundational infrastructure for AI
+ agents: collaborative, tool-using agents that operate in complex,
real-world environments.
- Our open-source framework empowers researchers and developers to rapidly
- build and experiment with multi-agent systems. These agents are powered by
- large language models (LLMs) and can interact with real-world
- tools—including terminals, web browsers, code execution, and
- APIs—unlocking high-value applications across enterprise automation,
+ Our open-source framework empowers researchers and developers to rapidly
+ build and experiment with multi-agent systems. These agents are powered by
+ large language models (LLMs) and can interact with real-world
+ tools—including terminals, web browsers, code execution, and
+ APIs—unlocking high-value applications across enterprise automation,
simulation, and AI research.
- CAMEL is already gaining significant traction: over 200 contributors have
- supported the project globally, our community spans more than 10,000
- active members on private channels like Discord and Slack, and our work
- has been cited more than 700 times in academic literature. CAMEL is widely
- used in research, education, and industry to develop agentic systems and
+ CAMEL is already gaining significant traction: over 200 contributors have
+ supported the project globally, our community spans more than 10,000
+ active members on private channels like Discord and Slack, and our work
+ has been cited more than 700 times in academic literature. CAMEL is widely
+ used in research, education, and industry to develop agentic systems and
drive innovation at the frontier of AI agent research.
- We believe CAMEL is well-positioned to become the core infrastructure
- layer for safe, scalable, and intelligent multi-agent systems in the
+ We believe CAMEL is well-positioned to become the core infrastructure
+ layer for safe, scalable, and intelligent multi-agent systems in the
emerging agent economy.
"""
@@ -225,34 +225,34 @@ def run_pptx_agent():
=== PPTXToolkit Example Usage ===
Example 1: Creating a presentation with various slide types
-The PowerPoint presentation titled "CAMEL-AI: The First Multi-Agent Framework"
+The PowerPoint presentation titled "CAMEL-AI: The First Multi-Agent Framework"
has been successfully created. You can download it using the link below:
[Download CAMEL-AI Presentation](sandbox:/Users/enrei/Desktop/camel0508/camel/
pptx_outputs/camel_ai_presentation.pptx)
Tool calls: [ToolCallingRecord(tool_name='create_presentation', args=
-{'content': '[{"title": "CAMEL-AI","subtitle": "The First Multi-Agent
-Framework"},{"heading": "Introduction to CAMEL","bullet_points":[">>
-Multi-agent framework for AI research.",">> Focus on the scaling laws of
+{'content': '[{"title": "CAMEL-AI","subtitle": "The First Multi-Agent
+Framework"},{"heading": "Introduction to CAMEL","bullet_points":[">>
+Multi-agent framework for AI research.",">> Focus on the scaling laws of
agents.",">> Insights into behavior, capabilities, and risks."],
"img_keywords":"AI research, agents"},{"heading": "Key Features",
-"bullet_points":[">> Collaborative, tool-using agents for real-world
-environments.",">> Open-source framework for rapid development and
+"bullet_points":[">> Collaborative, tool-using agents for real-world
+environments.",">> Open-source framework for rapid development and
experimentation.",">> Powered by large language models (LLMs)."],
-"img_keywords":"collaboration, technology"},{"heading": "Applications of
-CAMEL","bullet_points":[">> Interaction with real-world tools: terminals, web
-browsers, APIs.",">> Enterprise automation and simulation applications.",">>
-Significant contributions to AI research."],"img_keywords":"application of AI,
-technologies"},{"heading": "Community and Impact","bullet_points":[">> Over
-200 global contributors or supporters.",">> Community of more than 10,000
-active members on Discord and Slack.",">> Over 700 citations in academic
-literature."],"img_keywords":"community, collaboration"},{"heading": "Future
-of CAMEL","bullet_points":[">> Core infrastructure for multi-agent systems in
+"img_keywords":"collaboration, technology"},{"heading": "Applications of
+CAMEL","bullet_points":[">> Interaction with real-world tools: terminals, web
+browsers, APIs.",">> Enterprise automation and simulation applications.",">>
+Significant contributions to AI research."],"img_keywords":"application of AI,
+technologies"},{"heading": "Community and Impact","bullet_points":[">> Over
+200 global contributors or supporters.",">> Community of more than 10,000
+active members on Discord and Slack.",">> Over 700 citations in academic
+literature."],"img_keywords":"community, collaboration"},{"heading": "Future
+of CAMEL","bullet_points":[">> Core infrastructure for multi-agent systems in
agent economy.",">> Positioned for safe and scalable intelligent systems."],
-"img_keywords":"future of technology, AI"}]', 'filename':
+"img_keywords":"future of technology, AI"}]', 'filename':
'camel_ai_presentation.pptx', 'template': '/examples/toolkits/templates/modern.
pptx'}, result='PowerPoint presentation successfully created: /Users/enrei/
-Desktop/camel0508/camel/pptx_outputs/camel_ai_presentation.pptx',
+Desktop/camel0508/camel/pptx_outputs/camel_ai_presentation.pptx',
tool_call_id='call_DwygLSSBGGG9c6kXgQt3sFO5')]
==================================================
diff --git a/examples/toolkits/pubmed_toolkit.py b/examples/toolkits/pubmed_toolkit.py
index bd9c837283..303925894c 100644
--- a/examples/toolkits/pubmed_toolkit.py
+++ b/examples/toolkits/pubmed_toolkit.py
@@ -93,20 +93,20 @@
result={
'id': '37840631',
'title': 'Chinese guideline for lipid management (2023):
- a new guideline rich in domestic elements for
+ a new guideline rich in domestic elements for
controlling dyslipidemia.',
'authors': 'Li JJ',
'journal': 'J Geriatr Cardiol',
'pub_date': '2023 Sep 28',
- 'abstract': '1. J Geriatr Cardiol.
- 2023 Sep 28;20(9):618-620.
+ 'abstract': '1. J Geriatr Cardiol.
+ 2023 Sep 28;20(9):618-620.
doi: 10.26599/1671-5411.2023.09.007.
Chinese guideline for lipid management (2023):
- a new guideline rich in domestic elements for
+ a new guideline rich in domestic elements for
controlling dyslipidemia.Li JJ(1).\Author information:
(1)Division of Cardio-Metabolic Center,
- State Key Laboratory of Cardiovascular
- Disease, Fu Wai Hospital, National Center
+ State Key Laboratory of Cardiovascular
+ Disease, Fu Wai Hospital, National Center
for Cardiovascular Disease, Chinese Academy
of Medical Sciences, Peking Union Medical College,
Beijing, China.DOI: 10.26599/1671-5411.2023.09.007
@@ -115,8 +115,8 @@
'keywords': [],
'mesh_terms': [],
'publication_types': ['Journal Article'],
- 'references': ['35729555', '34734202', '34404993',
- '31172370', '30586774', '30526649',
+ 'references': ['35729555', '34734202', '34404993',
+ '31172370', '30586774', '30526649',
'29434622', '20350253']
},
tool_call_id='call_k8s7oFcRvDBKuEKvk48uoWXZ'
@@ -144,7 +144,7 @@
result=[
{'id': '37840631',
'title': 'Chinese guideline for lipid management (2023):
- a new guideline rich in domestic elements for
+ a new guideline rich in domestic elements for
controlling dyslipidemia.',
'authors': 'Li JJ',
'journal': 'J Geriatr Cardiol',
@@ -216,8 +216,8 @@
'correlates of protection: Application to the COVE '
'RNA-1273 vaccine trial.',
'authors': (
- 'Hejazi NS, Shen X, Carpp LN, Benkeser D, Follmann D,
- Janes HE, Baden LR, El Sahly HM, Deng W, Zhou H,
+ 'Hejazi NS, Shen X, Carpp LN, Benkeser D, Follmann D,
+ Janes HE, Baden LR, El Sahly HM, Deng W, Zhou H,
Leav B, Montefiori DC, Gilbert PB'
),
'journal': 'Int J Infect Dis',
@@ -245,20 +245,20 @@
tool_name='get_abstract',
args={'paper_id': 37840631},
result='''
- 1. J Geriatr Cardiol. 2023 Sep 28;20(9):618-620. doi:
+ 1. J Geriatr Cardiol. 2023 Sep 28;20(9):618-620. doi:
10.26599/1671-5411.2023.09.007.
-
- Chinese guideline for lipid management (2023):a new guideline
+
+ Chinese guideline for lipid management (2023):a new guideline
rich in domestic elements for controlling dyslipidemia.
-
+
Li JJ(1).
-
+
Author information:
(1)Division of Cardio-Metabolic Center, State Key Laboratory
- of Cardiovascular Disease, Fu Wai Hospital, National Center
+ of Cardiovascular Disease, Fu Wai Hospital, National Center
for Cardiovascular Disease, Chinese Academy of Medical Sciences,
Peking Union Medical College, Beijing, China.
-
+
DOI: 10.26599/1671-5411.2023.09.007
PMCID: PMC10568543
PMID: 37840631
diff --git a/examples/toolkits/reddit_toolkit.py b/examples/toolkits/reddit_toolkit.py
index c95aa08667..981f552dc7 100644
--- a/examples/toolkits/reddit_toolkit.py
+++ b/examples/toolkits/reddit_toolkit.py
@@ -38,15 +38,15 @@
===============================================================================
Collecting top 2 posts from r/python...
-Post Title: Lad wrote a Python script to download
- Alexa voice recordings,
+Post Title: Lad wrote a Python script to download
+ Alexa voice recordings,
he didn't expect this email.
- Comment: I will be honest, I was expecting a Cease
+ Comment: I will be honest, I was expecting a Cease
and Desist from Amazon.
Upvotes: 1857
- Comment: Very cool. That is the beauty of sharing.
- You never know who or how it will help someone,
- but you post it anyway because that is just being awesome.
+ Comment: Very cool. That is the beauty of sharing.
+ You never know who or how it will help someone,
+ but you post it anyway because that is just being awesome.
Thanks for sharing.
Upvotes: 264
@@ -65,8 +65,8 @@
Comment: scale tap piquant quiet advise salt languid abundant dolls long
-- mass edited with redact.dev
Upvotes: 1325
- Comment: Good job. But honestly, add a sleep timer of a few seconds.
- This will eventually get your IP banned on reddit
+ Comment: Good job. But honestly, add a sleep timer of a few seconds.
+ This will eventually get your IP banned on reddit
if you bombard them with too many requests.
Upvotes: 408
Comment: Cool! Could you share it?
@@ -142,28 +142,28 @@
Subreddit: learnprogramming
Post Title: I ran a 100% free full stack web development bootcamp
- for those laid off by the pandemic.
- 65 people got jobs and we are doing it again!
+ for those laid off by the pandemic.
+ 65 people got jobs and we are doing it again!
I would love to have you join us!
Comment Body: If you want to learn to code, this will change your life.
Can't make it to class? Recorded classes are on Twitch and YouTube
Never touched code before? He starts from square 1!
Shy/introvert/don't like talking? Stick to the chat
-Don't have support in real life?
+Don't have support in real life?
Join the discord and get more support and hype than your family
Don't have money? It's free!
Not in the US? Leon is Mr. Worldwide when it comes to teaching!
-100Devs isn't just a free online bootcamp,
-it's a whole support network that will be there for you to cheer you on,
-help you out, and give you a shoulder to cry on.
+100Devs isn't just a free online bootcamp,
+it's a whole support network that will be there for you to cheer you on,
+help you out, and give you a shoulder to cry on.
If you're on the fence, give it a try. You won't regret it.
Upvotes: 518
Sentiment Score: 0.385
Subreddit: learnprogramming
-Post Title: I ran a 100% free full stack web development bootcamp
- for those laid off by the pandemic.
- 65 people got jobs and we are doing it again!
+Post Title: I ran a 100% free full stack web development bootcamp
+ for those laid off by the pandemic.
+ 65 people got jobs and we are doing it again!
I would love to have you join us!
Comment Body: If you need any free dev help let me know
@@ -173,19 +173,19 @@
Subreddit: datascience
Post Title: data siens
-Comment Body: I was once reading this article that went as:
- "The AI already predicted how many goals Cavani
- will score at Manchester United".
+Comment Body: I was once reading this article that went as:
+ "The AI already predicted how many goals Cavani
+ will score at Manchester United".
It was a linear regression.
Upvotes: 345
Sentiment Score: 0.5
Subreddit: machinelearning
-Post Title: [D] A Demo from 1993 of 32-year-old Yann LeCun
- showing off the World's first Convolutional
+Post Title: [D] A Demo from 1993 of 32-year-old Yann LeCun
+ showing off the World's first Convolutional
Network for Text Recognition
-Comment Body: The fact that they also had to know the
- location of the numbers and that the algorithm
+Comment Body: The fact that they also had to know the
+ location of the numbers and that the algorithm
was robust to scale changes is impressive for 1993
It's not like they just solved MNIST in 1993, it's one step above that
diff --git a/examples/toolkits/search_toolkit.py b/examples/toolkits/search_toolkit.py
index 0aaf644891..a95f436a66 100644
--- a/examples/toolkits/search_toolkit.py
+++ b/examples/toolkits/search_toolkit.py
@@ -290,7 +290,7 @@ class PersonInfo(BaseModel):
# Example with ChatAgent using Exa search
exa_agent = ChatAgent(
- system_message="""You are a helpful assistant that can use Exa search
+ system_message="""You are a helpful assistant that can use Exa search
engine to find the latest research papers.""",
tools=[FunctionTool(SearchToolkit().search_exa)],
)
@@ -304,51 +304,51 @@ class PersonInfo(BaseModel):
===============================================================================
Here are some of the latest developments in quantum error correction:
-1. **Suppressing Quantum Errors by Scaling a Surface Code Logical Qubit**
-- **Published Date**: February 22, 2023
-- **Authors**: Google Quantum AI
-- **Summary**: This work reports on a 72-qubit superconducting device
+1. **Suppressing Quantum Errors by Scaling a Surface Code Logical Qubit**
+- **Published Date**: February 22, 2023
+- **Authors**: Google Quantum AI
+- **Summary**: This work reports on a 72-qubit superconducting device
implementing 49-qubit distance-5 surface code, improving the performance over
a dist-3 code. The research demonstrates how scaling error-correcting codes
-can lead to significant reductions in logical error rates.
+can lead to significant reductions in logical error rates.
- **Link**: [Read paper](https://www.nature.com/articles/s41586-022-05434-1)
-2. **Increasing Error Tolerance in Quantum Computers with Dynamic Bias**
-- **Published Date**: March 28, 2023
+2. **Increasing Error Tolerance in Quantum Computers with Dynamic Bias**
+- **Published Date**: March 28, 2023
- **Authors**: Héctor Bombín, C. Dawson, Naomi H. Nickerson, M. Pant
- **Summary**: This study introduces a method for dynamically arranging error
-biases to enhance error tolerance in fusion-based quantum computing. By
+biases to enhance error tolerance in fusion-based quantum computing. By
adaptively choosing bias during operations, it triples the loss tolerance.
- **Link**: [Read the paper](https://arxiv.org/pdf/2303.16122.pdf)
-3. **Fault Tolerant Non-Clifford State Preparation for Arbitrary Rotations**
-- **Published Date**: March 30, 2023
-- **Authors**: Hyeongrak Choi, Frederic T. Chong, Dirk Englund, Yong Ding
-- **Summary**: This paper proposes a post-selection-based algorithm for
+3. **Fault Tolerant Non-Clifford State Preparation for Arbitrary Rotations**
+- **Published Date**: March 30, 2023
+- **Authors**: Hyeongrak Choi, Frederic T. Chong, Dirk Englund, Yong Ding
+- **Summary**: This paper proposes a post-selection-based algorithm for
efficiently preparing resource states for gate teleportation, achieving fault
tolerance with reduced resource overheads for non-Clifford rotations.
- **Link**: [Read the paper](https://export.arxiv.org/pdf/2303.17380v1.pdf)
-4. **Measurement-free Fault-tolerant Logical Zero-state Encoding**
-- **Published Date**: June 2, 2023
-- **Authors**: Hayato Goto, Yinghao Ho, Taro Kanao
-- **Summary**: This work presents an efficient encoding method for the
-nine-qubit surface code that requires no measurement and can operate on a
+4. **Measurement-free Fault-tolerant Logical Zero-state Encoding**
+- **Published Date**: June 2, 2023
+- **Authors**: Hayato Goto, Yinghao Ho, Taro Kanao
+- **Summary**: This work presents an efficient encoding method for the
+nine-qubit surface code that requires no measurement and can operate on a
one-dimensional qubit array, demonstrating its fault tolerance.
- **Link**: [Read the paper](https://export.arxiv.org/pdf/2303.17211v2.pdf)
-5. **High-threshold and Low-overhead Fault-tolerant Quantum Memory**
-- **Published Date**: March 27, 2024
-- **Author**: Theodore J. Yoder
+5. **High-threshold and Low-overhead Fault-tolerant Quantum Memory**
+- **Published Date**: March 27, 2024
+- **Author**: Theodore J. Yoder
- **Summary**: This research discusses high-rate LDPC codes for quantum error
correction, presenting codes that require fewer physical qubits compared to
traditional surface codes while achieving similar error thresholds.
- **Link**: [Read paper](https://www.nature.com/articles/s41586-024-07107-7)
-These studies reflect ongoing advances in quantum error correction, focusing
+These studies reflect ongoing advances in quantum error correction, focusing
on improving efficiency, fault tolerance, and minimizing resource overheads.
===============================================================================
"""
@@ -362,17 +362,17 @@ class PersonInfo(BaseModel):
===============================================================================
{'request_id': '78a77a7e004dd97bc18bd907b90d152b', 'results': [
{'result_id': 1, 'title': 'Investor Relations', 'snippet':
- 'Stock Information Alibaba Group(BABA)-NYSE 112.280 1.690(-1.483%)
+ 'Stock Information Alibaba Group(BABA)-NYSE 112.280 1.690(-1.483%)
2025-04-15T20:01 EDT Prices shown in USD The data service is provided
- by Alibaba Cloud,with a delay of at least 15 minutes. Alibaba
+ by Alibaba Cloud,with a delay of at least 15 minutes. Alibaba
Group(9988)-H...', 'url': 'https://www.alibabagroup.com/
- en-US/investor-relations', 'hostname': 'www.alibabagroup.com',
+ en-US/investor-relations', 'hostname': 'www.alibabagroup.com',
'summary': 'February 20, 2025\nAlibaba Group Will Announce December
- Quarter 2024 Results on February 20, 2025April 2, 2025\nAlibaba Group
- Announces December Quarter 2024 Results\nFebruary 20, 2025Stock
+ Quarter 2024 Results on February 20, 2025April 2, 2025\nAlibaba Group
+ Announces December Quarter 2024 Results\nFebruary 20, 2025Stock
Information\nAlibaba Group (BABA) - NYSE\n$\n112.280\n-$1.690(-1.483%
2025-04-15T20:01 EDTAlibaba Group (9988) - HKEX\nHK$\n104.400\n-HK$5.500
- (-5.005%)\n2025-04-16T12:00 HKT\nPrices shown in HKD',
+ (-5.005%)\n2025-04-16T12:00 HKT\nPrices shown in HKD',
'score': 0.33736322991609163, 'publish_time': 1744646400000},
{'result_id': 2, 'title': 'technode'.....}
]}
diff --git a/examples/toolkits/semantic_scholar_toolkit.py b/examples/toolkits/semantic_scholar_toolkit.py
index 37570fcc08..445a17e2cc 100644
--- a/examples/toolkits/semantic_scholar_toolkit.py
+++ b/examples/toolkits/semantic_scholar_toolkit.py
@@ -86,18 +86,18 @@
'''
================================================================
-[FunctionCallingRecord(func_name='fetch_paper_data_title',
+[FunctionCallingRecord(func_name='fetch_paper_data_title',
args={'paperTitle': 'Construction of the Literature Graph in
Semantic Scholar', 'fields': 'title,abstract,authors,year,
citationCount,paperId'}, result={'total': 1, 'offset': 0,
'data': [{'paperId': '649def34f8be52c8b66281af98ae884c09aef38b',
'title': 'Construction of the Literature Graph in Semantic
Scholar', 'abstract': 'We describe a deployed scalable system
-for organizing published scientific literature into a
-heterogeneous graph to facilitate algorithmic manipulation and
+for organizing published scientific literature into a
+heterogeneous graph to facilitate algorithmic manipulation and
discovery. The resulting literature graph consists of more than
280M nodes, representing papers, authors, entities and various
- interactions between them (e.g., authorships, citations,
+ interactions between them (e.g., authorships, citations,
entity mentions). We reduce literature graph construction into
familiar NLP tasks (e.g., entity extraction and linking),
point out research challenges due to differences from standard
@@ -107,7 +107,7 @@
'''
# Search a paper through its title
-usr_msg = """search the paper with paper id of
+usr_msg = """search the paper with paper id of
'649def34f8be52c8b66281af98ae884c09aef38b' for me"""
camel_agent.reset()
response = camel_agent.step(usr_msg)
@@ -116,15 +116,15 @@
'''
================================================================
[FunctionCallingRecord(func_name='fetch_paper_data_id', args=
-{'paperID': '649def34f8be52c8b66281af98ae884c09aef38b',
+{'paperID': '649def34f8be52c8b66281af98ae884c09aef38b',
'fields': 'title,abstract,authors,year,citationCount,
-publicationTypes,publicationDate,openAccessPdf'},
+publicationTypes,publicationDate,openAccessPdf'},
result={'paperId': '649def34f8be52c8b66281af98ae884c09aef38b',
'title': 'Construction of the Literature Graph in Semantic
Scholar', 'abstract': 'We describe a deployed scalable system
for organizing published scientific literature into a
heterogeneous graph to facilitate algorithmic manipulation
- and discovery. The resulting literature graph consists of
+ and discovery. The resulting literature graph consists of
more than 280M nodes, representing papers, authors, entities
and various interactions between them (e.g., authorships,
citations, entity mentions). We reduce literature graph
@@ -145,20 +145,20 @@
'''
================================================================
-[FunctionCallingRecord(func_name='fetch_bulk_paper_data',
+[FunctionCallingRecord(func_name='fetch_bulk_paper_data',
args={'query': 'generative ai', 'year': '2024-', 'fields':
'title,url,publicationTypes,publicationDate,openAccessPdf'},
result={'total': 9849, 'token': 'PCOA3RZZB2ADADAEYCX2BLJJRDEGL
PUCFA3I5XJAKEAB3YXPGDOTY2GU3WHI4ZMALUMAPUDPHP724CEUVEFKTYRZY5K
-LUU53Y5MWWEINIKYZZRC3YT3H4AF7CTSQ', 'data': [{'paperId':
+LUU53Y5MWWEINIKYZZRC3YT3H4AF7CTSQ', 'data': [{'paperId':
'0008cd09c0449451b9e6e6de35c29009f0883cd9', 'url': 'https://www
.semanticscholar.org/paper/0008cd09c0449451b9e6e6de35c29009
f0883cd9', 'title': 'A Chitchat on Using ChatGPT for Cheating',
'openAccessPdf': {'url': 'https://doi.org/10.34074/proc.240106'
- , 'status': 'BRONZE'}, 'publicationTypes': ['Conference'],
+ , 'status': 'BRONZE'}, 'publicationTypes': ['Conference'],
'publicationDate': '2024-07-24'}, {'paperId': '0013aecf813400
174158e4f012918c5408f90962', 'url': 'https://www.semanticsc
- holar.org/paper/0013aecf813400174158e4f012918c5408f90962',
+ holar.org/paper/0013aecf813400174158e4f012918c5408f90962',
'title': 'Can novice teachers detect AI-generated texts in EFL
writing?', 'openAccessPdf': None, 'publicationTypes':
['JournalArticle'], 'publicationDate'
@@ -174,21 +174,21 @@
'''
================================================================
-[FunctionCallingRecord(func_name='fetch_bulk_paper_data',
+[FunctionCallingRecord(func_name='fetch_bulk_paper_data',
args={'query': 'ai and bio', 'year': '2024-', 'fields': 'title,
url,publicationTypes,publicationDate,openAccessPdf'}, result=
{'total': 207, 'token': None, 'data': [{'paperId': '00c8477a9c
c28b85e4f6da13d2a889c94a955291', 'url': 'https://www.semantics
-cholar.org/paper/00c8477a9cc28b85e4f6da13d2a889c94a955291',
+cholar.org/paper/00c8477a9cc28b85e4f6da13d2a889c94a955291',
'title': 'Explaining Enterprise Knowledge Graphs with Large
- Language Models and Ontological Reasoning', 'openAccessPdf':
+ Language Models and Ontological Reasoning', 'openAccessPdf':
None, 'publicationTypes': ['JournalArticle'], 'publicationDate
': None}, {'paperId': '01726fbfc8ee716c82b9c4cd70696906d3a4
46d0', 'url': 'https://www.semanticscholar.org/paper/01726fbfc
- 8ee716c82b9c4cd70696906d3a446d0', 'title': 'Study Research
+ 8ee716c82b9c4cd70696906d3a446d0', 'title': 'Study Research
Protocol for Phenome India-CSIR Health Cohort Knowledgebase
- (PI-CHeCK): A Prospective multi-modal follow-up study on a
- nationwide employee cohort.', 'openAccessPdf': {'url':
+ (PI-CHeCK): A Prospective multi-modal follow-up study on a
+ nationwide employee cohort.', 'openAccessPdf': {'url':
'https://www.medrxiv.org/content/medrxiv/early/2024/10/19/2024
.10.17.24315252.full.pdf', 'status'
================================================================
@@ -209,7 +209,7 @@
[FunctionCallingRecord(func_name='fetch_recommended_papers',
args={'positive_paper_ids': ['02138d6d094d1e7511c157f0b1a3dd4e
5b20ebee', '018f58247a20ec6b3256fd3119f57980a6f37748'], 'negati
-ve_paper_ids': ['0045ad0c1e14a4d1f4b011c92eb36b8df63d65bc'],
+ve_paper_ids': ['0045ad0c1e14a4d1f4b011c92eb36b8df63d65bc'],
'fields': 'title,url,citationCount,authors,publicationTypes,
publicationDate,openAccessPdf', 'limit': 20, 'save_to_file': F
alse}, result={'recommendedPapers': [{'paperId': '9cb202a72171
@@ -235,7 +235,7 @@
'''
================================================================
-[FunctionCallingRecord(func_name='fetch_recommended_papers',
+[FunctionCallingRecord(func_name='fetch_recommended_papers',
args={'positive_paper_ids': ['02138d6d094d1e7511c157f0b1a3dd4e5
b20ebee', '018f58247a20ec6b3256fd3119f57980a6f37748'], 'negativ
e_paper_ids': ['0045ad0c1e14a4d1f4b011c92eb36b8df63d65bc'],
@@ -243,14 +243,14 @@
publicationDate,openAccessPdf', 'limit': 20, 'save_to_file': T
rue}, result={'recommendedPapers': [{'paperId': '9cb202a7217
1dc954f8180b42e08da7ab31e16a1', 'url': 'https://www.semantics
- cholar.org/paper/9cb202a72171dc954f8180b42e08da7ab31e16a1',
+ cholar.org/paper/9cb202a72171dc954f8180b42e08da7ab31e16a1',
'title': 'Embrace, Don't Avoid: Reimagining Higher Education
with Generative Artificial Intelligence', 'citationCount':
0, 'openAccessPdf': {'url': 'https://heca-analitika.com/jeml
/article/download/233/157', 'status': 'HYBRID'}, 'publication
Types': ['JournalArticle'], 'publicationDate': '2024-11-28',
'authors': [{'authorId': '1659371967', 'name': 'T. R. Novia
- ndy'}, {'authorId': '1657989613', 'name': 'A. Maulana'},
+ ndy'}, {'authorId': '1657989613', 'name': 'A. Maulana'},
{'authorId': '146805414', 'name'
================================================================
'''
@@ -267,7 +267,7 @@
'''
================================================================
-[FunctionCallingRecord(func_name='fetch_recommended_papers',
+[FunctionCallingRecord(func_name='fetch_recommended_papers',
args={'positive_paper_ids': ['02138d6d094d1e7511c157f0b1a3dd4e5
b20ebee', '018f58247a20ec6b3256fd3119f57980a6f37748'], 'negat
ive_paper_ids': ['0045ad0c1e14a4d1f4b011c92eb36b8df63d65bc'],
@@ -276,7 +276,7 @@
': True}, result={'recommendedPapers': [{'paperId': '9cb20
2a72171dc954f8180b42e08da7ab31e16a1', 'url': 'https://www.se
manticscholar.org/paper/9cb202a72171dc954f8180b42e08da7ab31e
- 16a1', 'title': 'Embrace, Don't Avoid: Reimagining Higher
+ 16a1', 'title': 'Embrace, Don't Avoid: Reimagining Higher
Education with Generative Artificial Intelligence', 'citat
ionCount': 0, 'openAccessPdf': {'url': 'https://heca-anali
tika.com/jeml/article/download/233/157', 'status': 'HYBR
diff --git a/examples/toolkits/sympy_toolkit.py b/examples/toolkits/sympy_toolkit.py
index 4c493b5d85..b6691ab4d8 100644
--- a/examples/toolkits/sympy_toolkit.py
+++ b/examples/toolkits/sympy_toolkit.py
@@ -18,7 +18,7 @@
from camel.types import ModelPlatformType, ModelType
# Define system message
-sys_msg = """You are a helpful math assistant that can perform symbolic
+sys_msg = """You are a helpful math assistant that can perform symbolic
computations"""
# Set model config
@@ -46,9 +46,9 @@
print(response.info['tool_calls'])
'''
===============================================================================
-[ToolCallingRecord(tool_name='simplify_expression', args={'expression': '(x**4
+[ToolCallingRecord(tool_name='simplify_expression', args={'expression': '(x**4
- 16)/(x**2 - 4) + sin(x)**2 + cos(x)**2 + (x**3 + 6*x**2 + 12*x + 8)/(x + 2)
-'}, result='{"status": "success", "result": "2*x**2 + 4*x + 9"}',
+'}, result='{"status": "success", "result": "2*x**2 + 4*x + 9"}',
tool_call_id='call_CdoZsLWeagT0yBM13RYuz09W')]
===============================================================================
'''
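As a quick sanity check of the `simplify_expression` result shown in the diff above, the same simplification can be reproduced with SymPy directly, bypassing the agent and toolkit entirely (assumes SymPy is installed):

```python
# Reproduce the simplification from the ToolCallingRecord above using
# SymPy directly (no agent involved). Expression and expected result
# are taken from the example output in the diff.
from sympy import symbols, sin, cos, simplify

x = symbols('x')
expr = (
    (x**4 - 16) / (x**2 - 4)                 # cancels to x**2 + 4
    + sin(x)**2 + cos(x)**2                  # Pythagorean identity: 1
    + (x**3 + 6*x**2 + 12*x + 8) / (x + 2)   # (x + 2)**3 / (x + 2)
)
result = simplify(expr)
print(result)  # 2*x**2 + 4*x + 9
```

Summing the three simplified pieces gives `(x**2 + 4) + 1 + (x**2 + 4*x + 4)`, which matches the `2*x**2 + 4*x + 9` in the recorded tool result.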
diff --git a/examples/toolkits/synthesize_function_execution.py b/examples/toolkits/synthesize_function_execution.py
index ee8a151a47..ec997eaf41 100644
--- a/examples/toolkits/synthesize_function_execution.py
+++ b/examples/toolkits/synthesize_function_execution.py
@@ -84,11 +84,11 @@ class MovieResponse(BaseModel):
"""
===============================================================================
-Warning: No synthesize_output_model provided. Use `gpt-4o-mini` to synthesize
+Warning: No synthesize_output_model provided. Use `gpt-4o-mini` to synthesize
the output.
Synthesize output: False
-It seems that I'm unable to access the movie data at the moment due to a
-subscription issue with the API. However, if you provide me with the title of
+It seems that I'm unable to access the movie data at the moment due to a
+subscription issue with the API. However, if you provide me with the title of
the movie or any other details, I can help you find information about it!
===============================================================================
"""
@@ -96,8 +96,8 @@ class MovieResponse(BaseModel):
"""
===============================================================================
Synthesize output: True
-{'rating': '8.8', 'description': 'A thief who steals corporate secrets through
-the use of dream-sharing technology is given the inverse task of planting an
+{'rating': '8.8', 'description': 'A thief who steals corporate secrets through
+the use of dream-sharing technology is given the inverse task of planting an
idea into the mind of a CEO.', 'movie_title': 'Inception'}
===============================================================================
"""
diff --git a/examples/toolkits/task_planning_toolkit.py b/examples/toolkits/task_planning_toolkit.py
index d7f63faebf..0c389bb6f1 100644
--- a/examples/toolkits/task_planning_toolkit.py
+++ b/examples/toolkits/task_planning_toolkit.py
@@ -31,8 +31,8 @@
# Set up the ChatAgent with thinking capabilities
sys_prompt = TextPrompt(
- """You are a helpful assistant that can decompose a task or
- re-plan a task when it's already decomposed into sub-tasks and its
+ """You are a helpful assistant that can decompose a task or
+ re-plan a task when it's already decomposed into sub-tasks and its
sub-tasks are not relevant or aligned with the goal of the main task.
Please use the tools to decompose or re-plan the task."""
)
@@ -83,27 +83,27 @@
9. **ID:** 1.8 - Edit and proofread the paper
10. **ID:** 1.9 - Prepare for submission
-These subtasks are aligned with the goal of writing a research paper on AI
-ethics. If you need any further adjustments or additional tasks, feel free to
+These subtasks are aligned with the goal of writing a research paper on AI
+ethics. If you need any further adjustments or additional tasks, feel free to
ask!
Tool calls:
-[ToolCallingRecord(tool_name='decompose_task', args={'original_task_content':
-'Write a research paper on AI ethics', 'sub_task_contents': ['Research the
-history of AI ethics', 'Identify key ethical issues in AI', 'Review existing
-literature on AI ethics', 'Conduct interviews with experts in the field',
-'Draft the paper outline', 'Write the introduction', 'Write the body
-sections', 'Write the conclusion', 'Edit and proofread the paper', 'Prepare
-for submission'], 'original_task_id': '1'}, result=[Task(id='1.0',
-content='Research the history of AI ethics', state='OPEN'), Task(id='1.1',
-content='Identify key ethical issues in AI', state='OPEN'), Task(id='1.2',
+[ToolCallingRecord(tool_name='decompose_task', args={'original_task_content':
+'Write a research paper on AI ethics', 'sub_task_contents': ['Research the
+history of AI ethics', 'Identify key ethical issues in AI', 'Review existing
+literature on AI ethics', 'Conduct interviews with experts in the field',
+'Draft the paper outline', 'Write the introduction', 'Write the body
+sections', 'Write the conclusion', 'Edit and proofread the paper', 'Prepare
+for submission'], 'original_task_id': '1'}, result=[Task(id='1.0',
+content='Research the history of AI ethics', state='OPEN'), Task(id='1.1',
+content='Identify key ethical issues in AI', state='OPEN'), Task(id='1.2',
content='Review existing literature on AI ethics', state='OPEN'), Task(id='1.
3', content='Conduct interviews with experts in the field', state='OPEN'), Task
-(id='1.4', content='Draft the paper outline', state='OPEN'), Task(id='1.5',
-content='Write the introduction', state='OPEN'), Task(id='1.6', content='Write
-the body sections', state='OPEN'), Task(id='1.7', content='Write the
-conclusion', state='OPEN'), Task(id='1.8', content='Edit and proofread the
-paper', state='OPEN'), Task(id='1.9', content='Prepare for submission',
+(id='1.4', content='Draft the paper outline', state='OPEN'), Task(id='1.5',
+content='Write the introduction', state='OPEN'), Task(id='1.6', content='Write
+the body sections', state='OPEN'), Task(id='1.7', content='Write the
+conclusion', state='OPEN'), Task(id='1.8', content='Edit and proofread the
+paper', state='OPEN'), Task(id='1.9', content='Prepare for submission',
state='OPEN')], tool_call_id='call_WYeAEByDSR6PtlDiC5EmiTjZ')]
===============================================================================
@@ -132,7 +132,7 @@
Examples: Problem Solving with TaskPlanning Toolkit
===============================================================================
Final result of re_plan task:
- The main task is to write a research paper on AI ethics, and the irrelevant
+ The main task is to write a research paper on AI ethics, and the irrelevant
subtasks have been replaced with the following relevant subtasks:
1. **Research the history of AI ethics** (ID: 1.0)
@@ -145,25 +145,25 @@
8. **Write the conclusion** (ID: 1.7)
9. **Edit and proofread the paper** (ID: 1.8)
-These subtasks are now aligned with the goal of completing the research paper
+These subtasks are now aligned with the goal of completing the research paper
on AI ethics.
Tool calls:
-[ToolCallingRecord(tool_name='replan_tasks', args={'original_task_content':
-'Write a research paper on AI ethics', 'sub_task_contents': ['Research the
-history of AI ethics', 'Identify key ethical issues in AI', 'Review existing
-literature on AI ethics', 'Conduct interviews with experts in the field',
-'Draft the paper outline', 'Write the introduction', 'Write the body
-sections', 'Write the conclusion', 'Edit and proofread the paper'],
-'original_task_id': '1'}, result=[Task(id='1.0', content='Research the history
-of AI ethics', state='OPEN'), Task(id='1.1', content='Identify key ethical
-issues in AI', state='OPEN'), Task(id='1.2', content='Review existing
-literature on AI ethics', state='OPEN'), Task(id='1.3', content='Conduct
-interviews with experts in the field', state='OPEN'), Task(id='1.4',
-content='Draft the paper outline', state='OPEN'), Task(id='1.5',
-content='Write the introduction', state='OPEN'), Task(id='1.6', content='Write
-the body sections', state='OPEN'), Task(id='1.7', content='Write the
-conclusion', state='OPEN'), Task(id='1.8', content='Edit and proofread the
+[ToolCallingRecord(tool_name='replan_tasks', args={'original_task_content':
+'Write a research paper on AI ethics', 'sub_task_contents': ['Research the
+history of AI ethics', 'Identify key ethical issues in AI', 'Review existing
+literature on AI ethics', 'Conduct interviews with experts in the field',
+'Draft the paper outline', 'Write the introduction', 'Write the body
+sections', 'Write the conclusion', 'Edit and proofread the paper'],
+'original_task_id': '1'}, result=[Task(id='1.0', content='Research the history
+of AI ethics', state='OPEN'), Task(id='1.1', content='Identify key ethical
+issues in AI', state='OPEN'), Task(id='1.2', content='Review existing
+literature on AI ethics', state='OPEN'), Task(id='1.3', content='Conduct
+interviews with experts in the field', state='OPEN'), Task(id='1.4',
+content='Draft the paper outline', state='OPEN'), Task(id='1.5',
+content='Write the introduction', state='OPEN'), Task(id='1.6', content='Write
+the body sections', state='OPEN'), Task(id='1.7', content='Write the
+conclusion', state='OPEN'), Task(id='1.8', content='Edit and proofread the
paper', state='OPEN')], tool_call_id='call_37UsfFNonx7rbmGWeJePbmbo')]
===============================================================================
"""
diff --git a/examples/toolkits/terminal_toolkit.py b/examples/toolkits/terminal_toolkit.py
index 4ffc928393..966733df15 100644
--- a/examples/toolkits/terminal_toolkit.py
+++ b/examples/toolkits/terminal_toolkit.py
@@ -65,12 +65,12 @@
print(str(response.info['tool_calls'])[:1000])
"""
===============================================================================
-[ToolCallingRecord(tool_name='shell_exec', args={'id': 'session1', 'exec_dir':
-'/Users/enrei/Desktop/camel0302/camel/workspace', 'command': 'mkdir logs'},
+[ToolCallingRecord(tool_name='shell_exec', args={'id': 'session1', 'exec_dir':
+'/Users/enrei/Desktop/camel0302/camel/workspace', 'command': 'mkdir logs'},
result='', tool_call_id='call_ekWtDhrwxOg20lz55pqLEKvm'), ToolCallingRecord
(tool_name='shell_exec', args={'id': 'session2', 'exec_dir': '/Users/enrei/
-Desktop/camel0302/camel/workspace/logs', 'command': 'ls -la'}, result='total
-0\ndrwxr-xr-x 2 enrei staff 64 Mar 30 04:29 .\ndrwxr-xr-x 4 enrei staff
+Desktop/camel0302/camel/workspace/logs', 'command': 'ls -la'}, result='total
+0\ndrwxr-xr-x 2 enrei staff 64 Mar 30 04:29 .\ndrwxr-xr-x 4 enrei staff
128 Mar 30 04:29 ..\n', tool_call_id='call_FNdkLkvUahtEZUf7YZiJrjfo')]
===============================================================================
"""
@@ -88,13 +88,13 @@
print(str(response.info['tool_calls'])[:1000])
"""
===============================================================================
-[ToolCallingRecord(tool_name='shell_exec', args={'id': 'create_log_file',
-'exec_dir': '/Users/enrei/Desktop/camel0302/camel/workspace/logs', 'command':
-"echo 'INFO: Application started successfully at 2024-03-10' > app.log"},
+[ToolCallingRecord(tool_name='shell_exec', args={'id': 'create_log_file',
+'exec_dir': '/Users/enrei/Desktop/camel0302/camel/workspace/logs', 'command':
+"echo 'INFO: Application started successfully at 2024-03-10' > app.log"},
result='', tool_call_id='call_bctQQYnWgAuPp1ga7a7xM6bo'), ToolCallingRecord
(tool_name='shell_exec', args={'id': 'show_log_file_content', 'exec_dir': '/
Users/enrei/Desktop/camel0302/camel/workspace/logs', 'command': 'cat app.
-log'}, result='INFO: Application started successfully at 2024-03-10\n',
+log'}, result='INFO: Application started successfully at 2024-03-10\n',
tool_call_id='call_wPYJBG3eYrUsjFJYIYYynxuz')]
===============================================================================
"""
@@ -112,7 +112,7 @@
"""
===============================================================================
[ToolCallingRecord(tool_name='file_find_in_content', args={'file': '/Users/
-enrei/Desktop/camel0302/camel/workspace/logs/app.log', 'regex': 'INFO',
+enrei/Desktop/camel0302/camel/workspace/logs/app.log', 'regex': 'INFO',
'sudo': False}, result='INFO: Application started successfully at 2024-03-10',
tool_call_id='call_PpeRUsldHyg5jSPLZxiGoVfq')]
===============================================================================
@@ -129,8 +129,8 @@
print(response.info['tool_calls'])
"""
===============================================================================
-[ToolCallingRecord(tool_name='shell_exec', args={'id': 'remove_logs',
-'exec_dir': '/Users/enrei/Desktop/camel0302/camel/workspace', 'command': 'rm
+[ToolCallingRecord(tool_name='shell_exec', args={'id': 'remove_logs',
+'exec_dir': '/Users/enrei/Desktop/camel0302/camel/workspace', 'command': 'rm
-rf logs'}, result='', tool_call_id='call_A2kUkVIAhkD9flWmmpTlS9FA')]
===============================================================================
"""
@@ -166,19 +166,19 @@
print(str(response.info['tool_calls'])[:1000])
"""
===============================================================================
-[ToolCallingRecord(tool_name='shell_exec', args={'id': 'session1', 'exec_dir':
-'/tmp', 'command': 'sleep 300 & echo $!'}, result='Operation restriction:
+[ToolCallingRecord(tool_name='shell_exec', args={'id': 'session1', 'exec_dir':
+'/tmp', 'command': 'sleep 300 & echo $!'}, result='Operation restriction:
Execution path /tmp must be within working directory /home/jjyaoao/openSource/
-camel/workspace', tool_call_id='call_G7TcVUJs195Er6yocORHysXP'),
-ToolCallingRecord(tool_name='shell_exec', args={'id': 'session1', 'exec_dir':
-'/home/jjyaoao/openSource/camel/workspace', 'command': 'sleep 300 & echo $!'},
-result='10804\n', tool_call_id='call_mncQosy3b4cuc1j5MGiltohH'),
-ToolCallingRecord(tool_name='shell_exec', args={'id': 'session2', 'exec_dir':
-'/home/jjyaoao/openSource/camel/workspace', 'command': 'ps aux'},
-result='USER PID %CPU %MEM VSZ RSS TTY STAT START TIME
-COMMAND\nroot 1 0.0 0.2 170104 12368 ? Ss 10:06 0:00
-/sbin/init\nroot 2 0.0 0.0 2776 1928 ? Sl 10:06 0:00
-/init\nroot 8 0.0 0.0 2776 4 ? Sl 10:06 0:00
+camel/workspace', tool_call_id='call_G7TcVUJs195Er6yocORHysXP'),
+ToolCallingRecord(tool_name='shell_exec', args={'id': 'session1', 'exec_dir':
+'/home/jjyaoao/openSource/camel/workspace', 'command': 'sleep 300 & echo $!'},
+result='10804\n', tool_call_id='call_mncQosy3b4cuc1j5MGiltohH'),
+ToolCallingRecord(tool_name='shell_exec', args={'id': 'session2', 'exec_dir':
+'/home/jjyaoao/openSource/camel/workspace', 'command': 'ps aux'},
+result='USER PID %CPU %MEM VSZ RSS TTY STAT START TIME
+COMMAND\nroot 1 0.0 0.2 170104 12368 ? Ss 10:06 0:00
+/sbin/init\nroot 2 0.0 0.0 2776 1928 ? Sl 10:06 0:00
+/init\nroot 8 0.0 0.0 2776 4 ? Sl 10:06 0:00
plan9 --control-socket 7 --log-level=debug --log-file=/dev/null ...',
tool_call_id='call_UvxQrsb1GpfDHTQQc6rLoQ3P')]
===============================================================================
@@ -193,14 +193,14 @@
"""
===============================================================================
[ToolCallingRecord(tool_name='shell_exec', args={'id': 'check_sleep_processes',
-'exec_dir': '/', 'command': 'ps aux | grep sleep'}, result='Operation
-restriction: Execution path / must be within working directory
+'exec_dir': '/', 'command': 'ps aux | grep sleep'}, result='Operation
+restriction: Execution path / must be within working directory
/home/jjyaoao/openSource/camel/workspace', tool_call_id=
'call_gbhmZ3mwpB07uPtVF3FxZaHu'), ToolCallingRecord(tool_name='shell_exec',
-args={'id': 'check_sleep_processes', 'exec_dir':
-'/home/jjyaoao/openSource/camel/workspace', 'command': 'ps aux | grep sleep'},
-result='root 11385 0.0 0.0 2620 532 pts/4 S+ 11:16 0:00
-/bin/sh -c ps aux | grep sleep\nroot 11387 0.0 0.0 8172 656 pts/4
+args={'id': 'check_sleep_processes', 'exec_dir':
+'/home/jjyaoao/openSource/camel/workspace', 'command': 'ps aux | grep sleep'},
+result='root 11385 0.0 0.0 2620 532 pts/4 S+ 11:16 0:00
+/bin/sh -c ps aux | grep sleep\nroot 11387 0.0 0.0 8172 656 pts/4
S+ 11:16 0:00 grep sleep\n', tool_call_id='call_gSZqRaqNAtYjUXOfvVuaObw2')]
===============================================================================
"""
@@ -227,9 +227,9 @@
print(str(response.info['tool_calls']))
"""
===============================================================================
-[ToolCallingRecord(tool_name='shell_exec', args={'id': '1', 'command': 'pip
+[ToolCallingRecord(tool_name='shell_exec', args={'id': '1', 'command': 'pip
install python-pptx'}, result='\nStderr Output:\n/Users/enrei/Desktop/
-camel0605/camel/.venv/bin/python3: No module named pip\n',
+camel0605/camel/.venv/bin/python3: No module named pip\n',
tool_call_id='call_ogvH8cKGGBMlulMV6IOCyN4q')]
============================================================
@@ -251,13 +251,13 @@
🤖 Returning control to CAMEL agent...
============================================================
-[ToolCallingRecord(tool_name='ask_user_for_help', args={'id':
-'terminal_session_1'}, result="Human assistance completed successfully for
+[ToolCallingRecord(tool_name='ask_user_for_help', args={'id':
+'terminal_session_1'}, result="Human assistance completed successfully for
session 'terminal_session_1'. Total commands executed: 1. Working directory: /
-Users/enrei/Desktop/camel0605/camel/workspace",
+Users/enrei/Desktop/camel0605/camel/workspace",
tool_call_id='call_eYtpU38YncR6PnfdlBqouSoV'), ToolCallingRecord
(tool_name='file_find_by_name', args={'path': '/Users/enrei/Desktop/camel0605/
-camel/workspace', 'glob': 'python_file'}, result='',
+camel/workspace', 'glob': 'python_file'}, result='',
tool_call_id='call_pyuYHoNvpPvXFbMjfo9DMUNe'), ToolCallingRecord
(tool_name='file_find_by_name', args={'path': '/Users/enrei/Desktop/camel0605/
camel/workspace', 'glob': '*'}, result='/Users/enrei/Desktop/camel0605/camel/
@@ -270,14 +270,14 @@
12\n/Users/enrei/Desktop/camel0605/camel/workspace/.venv/pyvenv.cfg\n/Users/
enrei/Desktop/camel0605/camel/workspace/.venv/lib\n/Users/enrei/Desktop/
camel0605/camel/workspace/.venv/lib/python3.12\n/Users/enrei/Desktop/camel0605/
-camel/workspace/.venv/lib/python3.12/site-packages',
+camel/workspace/.venv/lib/python3.12/site-packages',
tool_call_id='call_WwB219FGm4tAssjFA9UsIrRT'), ToolCallingRecord
(tool_name='shell_exec', args={'id': 'terminal_session_1', 'command': 'touch
python_file.py'}, result='', tool_call_id='call_OBhkDr5i2mzBRpty0JiSF8Dj'),
ToolCallingRecord(tool_name='shell_exec', args={'id': 'terminal_session_1',
-'command': 'ls -l'}, result='total 0\n-rw-r--r-- 1 enrei staff 0 Jun 27
+'command': 'ls -l'}, result='total 0\n-rw-r--r-- 1 enrei staff 0 Jun 27
17:26 python_file.py\n', tool_call_id='call_fllEujXWutTJmqpSWpHOOnIU')]
-(.venv) enrei@192 camel %
+(.venv) enrei@192 camel %
===============================================================================
"""
@@ -310,8 +310,8 @@
"""
===============================================================================
[ToolCallingRecord(tool_name='shell_exec', args={'id': 'check_numpy_version_1',
-'command': 'python3 -c "import numpy; print(numpy.__version__)"',
-'block': True}, result='2.2.6', tool_call_id='call_UuL6YGIMv7I4GSOjA8es65aW',
+'command': 'python3 -c "import numpy; print(numpy.__version__)"',
+'block': True}, result='2.2.6', tool_call_id='call_UuL6YGIMv7I4GSOjA8es65aW',
images=None)]
===============================================================================
"""
@@ -335,10 +335,10 @@
"""
===============================================================================
-[ToolCallingRecord(tool_name='shell_exec', args={'id': 'sess1', 'command':
-'ls -la', 'block': True}, result='total 12\ndrwxr-xr-x 3 root root 4096 Sep
-23 16:48 .\ndrwxr-xr-x 1 root root 4096 Sep 23 16:47 ..\ndrwxr-xr-x 2 root
-root 4096 Sep 23 16:49 logs', tool_call_id='call_YRYlz8KozpxXE2uGkcHIUnZU',
+[ToolCallingRecord(tool_name='shell_exec', args={'id': 'sess1', 'command':
+'ls -la', 'block': True}, result='total 12\ndrwxr-xr-x 3 root root 4096 Sep
+23 16:48 .\ndrwxr-xr-x 1 root root 4096 Sep 23 16:47 ..\ndrwxr-xr-x 2 root
+root 4096 Sep 23 16:49 logs', tool_call_id='call_YRYlz8KozpxXE2uGkcHIUnZU',
images=None)]
===============================================================================
"""
diff --git a/examples/toolkits/thinking_toolkit.py b/examples/toolkits/thinking_toolkit.py
index 31f3b032a5..a2340a031b 100644
--- a/examples/toolkits/thinking_toolkit.py
+++ b/examples/toolkits/thinking_toolkit.py
@@ -46,7 +46,7 @@
usr_msg = """
Help me solve this math problem:
-If a train travels at 60 mph and needs to cover 300 miles,
+If a train travels at 60 mph and needs to cover 300 miles,
with 3 stops of 15 minutes each, how long will the journey take?
"""
@@ -58,9 +58,9 @@
"""
Example: Problem Solving with Thinking Toolkit
===============================================================================
-The train's total journey time for traveling 300 miles at 60 mph, with
-3 stops of 15 minutes each, is 5.75 hours. This consists of 5 hours of
-travel time and 0.75 hours (or 45 minutes) of stop time. The conversion
+The train's total journey time for traveling 300 miles at 60 mph, with
+3 stops of 15 minutes each, is 5.75 hours. This consists of 5 hours of
+travel time and 0.75 hours (or 45 minutes) of stop time. The conversion
of stop time from minutes to hours was explicitly noted for clarity.
Tool Calls:
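The journey-time arithmetic in the answer above is easy to verify directly:

```python
# Verify the arithmetic from the thinking-toolkit answer above:
# 300 miles at 60 mph plus 3 stops of 15 minutes each.
distance_miles = 300
speed_mph = 60
num_stops = 3
stop_minutes = 15

travel_hours = distance_miles / speed_mph       # 5.0 hours of driving
stop_hours = num_stops * stop_minutes / 60      # 0.75 hours stopped
total_hours = travel_hours + stop_hours
print(total_hours)  # 5.75
```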
diff --git a/examples/toolkits/toolkit_message_integration.py b/examples/toolkits/toolkit_message_integration.py
index 4bfce0b9ea..0d994e2f8a 100644
--- a/examples/toolkits/toolkit_message_integration.py
+++ b/examples/toolkits/toolkit_message_integration.py
@@ -38,21 +38,21 @@
Information about Scaling laws
Searching for detailed information about Scaling laws from Wikipedia.
-Scaling laws refer to functional relationships between two quantities where a
-relative change in one quantity results in a relative change in the other
-quantity, proportional to the change raised to a constant exponent. This is
-known as a power law, where one quantity varies as a power of another, and the
+Scaling laws refer to functional relationships between two quantities where a
+relative change in one quantity results in a relative change in the other
+quantity, proportional to the change raised to a constant exponent. This is
+known as a power law, where one quantity varies as a power of another, and the
change is independent of the initial size of those quantities.
-For example, the area of a square has a power law relationship with the length
-of its side: if the side's length is doubled, the area is multiplied by 2²,
+For example, the area of a square has a power law relationship with the length
+of its side: if the side's length is doubled, the area is multiplied by 2²,
and if tripled, the area is multiplied by 3².
-Scaling laws can be observed in numerous natural and human-made phenomena,
-such as the sizes of craters on the moon, sizes of solar flares, cloud sizes,
-foraging patterns of species, frequencies of words in languages, and many
-more. These empirical distributions follow a power law over a range of
-magnitudes but cannot fit a power law for all values as it would imply
+Scaling laws can be observed in numerous natural and human-made phenomena,
+such as the sizes of craters on the moon, sizes of solar flares, cloud sizes,
+foraging patterns of species, frequencies of words in languages, and many
+more. These empirical distributions follow a power law over a range of
+magnitudes but cannot fit a power law for all values as it would imply
arbitrarily large or small values.
===============================================================================
"""
diff --git a/examples/toolkits/vertex_ai_veo_toolkit.py b/examples/toolkits/vertex_ai_veo_toolkit.py
index b8b6dc8e3e..2f5068f59b 100644
--- a/examples/toolkits/vertex_ai_veo_toolkit.py
+++ b/examples/toolkits/vertex_ai_veo_toolkit.py
@@ -174,7 +174,7 @@ def agent_integration_example():
agent = ChatAgent(
model=model,
tools=toolkit.get_tools(),
- system_message="""You are a video generation assistant.
+ system_message="""You are a video generation assistant.
You can help users create videos using Google's Vertex AI Veo models.
Always explain the video generation process and parameters to users.""", # noqa: E501
)
diff --git a/examples/toolkits/video_analysis_toolkit.py b/examples/toolkits/video_analysis_toolkit.py
index 9fd1ce326c..b82a9840e4 100644
--- a/examples/toolkits/video_analysis_toolkit.py
+++ b/examples/toolkits/video_analysis_toolkit.py
@@ -85,55 +85,55 @@
T/tmp4plhd3s3/Douchebag Bison.f251.webm (pass -k to keep)
Deleting original file /private/var/folders/93/f_71_t957cq9cmq2gsybs4_40000gn/
T/tmp4plhd3s3/Douchebag Bison.f247.webm (pass -k to keep)
-2025-03-09 21:17:08,036 - pyscenedetect - ERROR - VideoManager is deprecated
+2025-03-09 21:17:08,036 - pyscenedetect - ERROR - VideoManager is deprecated
and will be removed.
2025-03-09 21:17:08,060 - pyscenedetect - INFO - Loaded 1 video, framerate: 30.
000 FPS, resolution: 1280 x 720
-2025-03-09 21:17:08,061 - pyscenedetect - INFO - Duration set, start: None,
+2025-03-09 21:17:08,061 - pyscenedetect - INFO - Duration set, start: None,
duration: None, end: None.
2025-03-09 21:17:08,061 - pyscenedetect - INFO - Detecting scenes...
-2025-03-09 21:17:09,065 - camel.camel.toolkits.video_analysis_toolkit -
+2025-03-09 21:17:09,065 - camel.camel.toolkits.video_analysis_toolkit -
WARNING - No scenes detected in video, capturing frames at regular intervals
Video Analysis Result:
--------------------------------------------------
### Visual Analysis
1. **Identified Entities**:
- - **Wolves**: Multiple wolves are visible in the frames, characterized by
- their grayish fur, slender bodies, and bushy tails. They appear to be in a
+ - **Wolves**: Multiple wolves are visible in the frames, characterized by
+ their grayish fur, slender bodies, and bushy tails. They appear to be in a
pack, indicating social behavior.
- - **Bison**: A bison is present, identifiable by its large size, shaggy
- brown fur, and distinctive hump on its back. The bison is significantly
+ - **Bison**: A bison is present, identifiable by its large size, shaggy
+ brown fur, and distinctive hump on its back. The bison is significantly
larger than the wolves.
2. **Key Attributes**:
- - **Wolves**:
- - Size: Smaller than the bison, typically around 26-32 inches tall at the
+ - **Wolves**:
+ - Size: Smaller than the bison, typically around 26-32 inches tall at the
shoulder.
- Color: Predominantly gray with some variations in fur color.
- - Behavior: The wolves are shown moving in a coordinated manner,
+ - Behavior: The wolves are shown moving in a coordinated manner,
suggesting they are hunting or scavenging.
- **Bison**:
- Size: Much larger, can weigh up to 2,000 pounds.
- Color: Dark brown, with a thick coat.
- - Behavior: The bison appears to be stationary or moving slowly, possibly
+ - Behavior: The bison appears to be stationary or moving slowly, possibly
in a defensive posture.
3. **Groupings and Interactions**:
- - The wolves are seen surrounding the bison, indicating a predatory
- behavior. The interaction suggests a hunting scenario, where the wolves are
+ - The wolves are seen surrounding the bison, indicating a predatory
+ behavior. The interaction suggests a hunting scenario, where the wolves are
attempting to take down or scavenge from the bison.
### Audio Integration
-- **No audio transcription available**: Therefore, the analysis relies solely
+- **No audio transcription available**: Therefore, the analysis relies solely
on visual observations.
### Detailed Reasoning and Justification
- **Identification of Species**:
- - The wolves are identified by their physical characteristics and social
- behavior, which is typical of pack animals. Their movement patterns and
+ - The wolves are identified by their physical characteristics and social
+ behavior, which is typical of pack animals. Their movement patterns and
proximity to the bison indicate a hunting strategy.
- - The bison is easily distinguishable due to its size and unique physical
+ - The bison is easily distinguishable due to its size and unique physical
features, such as the hump and thick fur.
### Comprehensive Answer
@@ -143,9 +143,9 @@
- **Bison**: Large size, shaggy brown fur, distinctive hump.
### Important Considerations
-- The wolves exhibit coordinated movement, which is crucial for hunting, while
-the bison's size and defensive posture highlight its role as prey in this
-scenario. The visual cues of size, color, and behavior effectively distinguish
+- The wolves exhibit coordinated movement, which is crucial for hunting, while
+the bison's size and defensive posture highlight its role as prey in this
+scenario. The visual cues of size, color, and behavior effectively distinguish
these two species in the context of a predatory interaction.
==========================================================================
"""
diff --git a/examples/toolkits/webdeploy_toolkit.py b/examples/toolkits/webdeploy_toolkit.py
index b3f5a49e79..7a6780f675 100644
--- a/examples/toolkits/webdeploy_toolkit.py
+++ b/examples/toolkits/webdeploy_toolkit.py
@@ -41,7 +41,7 @@
)
# Example 1: deploy a simple web server,which can play snake game.
-query = """use toolkit writing html to deploy a
+query = """use toolkit writing html to deploy a
simple web server,which can play snake game.deploy it in port 8005"""
camel_agent.reset()
@@ -52,8 +52,8 @@
"""
==========================================================================
-I have deployed a simple web server that hosts a Snake game. You can play
-the game by opening the following URL in your web browser:
+I have deployed a simple web server that hosts a Snake game. You can play
+the game by opening the following URL in your web browser:
http://localhost:8000
Use the arrow keys to control the snake. Enjoy the game!
diff --git a/examples/toolkits/wechat_official_toolkit.py b/examples/toolkits/wechat_official_toolkit.py
index 852eb81559..35e0b77af5 100644
--- a/examples/toolkits/wechat_official_toolkit.py
+++ b/examples/toolkits/wechat_official_toolkit.py
@@ -85,99 +85,99 @@ def main():
main()
"""
-Text Message Response:
+Text Message Response:
I've successfully completed both tasks:
1. **Retrieved the followers list**: Found 6 total followers with their OpenIDs
-2. **Sent welcome message to the fifth follower**: The welcome message was
+2. **Sent welcome message to the fifth follower**: The welcome message was
successfully sent to the follower with OpenID `oKSAF2******rEJI`
-The welcome message "Welcome! Thank you for following our WeChat Official
-Account. We're excited to have you as part of our community!" has been
+The welcome message "Welcome! Thank you for following our WeChat Official
+Account. We're excited to have you as part of our community!" has been
delivered successfully.
Tool calls:
1. get_followers_list({'next_openid': ''})
- 2. send_customer_message({'openid': 'oKSAF2******rEJI', 'content': "Welcome!
- Thank you for following our WeChat Official Account. We're excited to have
+ 2. send_customer_message({'openid': 'oKSAF2******rEJI', 'content': "Welcome!
+ Thank you for following our WeChat Official Account. We're excited to have
you as part of our community!", 'msgtype': 'text'})
==============================
-Image Upload and Send Response:
+Image Upload and Send Response:
Perfect! I've successfully completed both tasks:
-1. **Uploaded the image**: The image at '/home/lyz/Camel/CAMEL_logo.jpg' has
+1. **Uploaded the image**: The image at '/home/lyz/Camel/CAMEL_logo.jpg' has
been uploaded as temporary media with media_id: `A_6Hlu******C6IUr`
-2. **Sent the image to the fifth follower**: The image has been successfully
+2. **Sent the image to the fifth follower**: The image has been successfully
sent to the fifth follower (OpenID: oKSAF2******rEJI)
Both operations completed successfully!
Tool calls:
1. upload_wechat_media({'media_type': 'image', 'file_path': '/home/lyz/Camel/
CAMEL_logo.jpg', 'permanent': False, 'description': ''})
- 2. send_customer_message({'openid': 'oKSAF2******rEJI', 'content':
+ 2. send_customer_message({'openid': 'oKSAF2******rEJI', 'content':
'A_6Hlu******C6IUr', 'msgtype': 'image'})
==============================
-(.camel) [lyz@dev10 toolkits]$ python wechat_official_toolkit.py
-2025-09-01 02:41:48,494 - root - WARNING - Invalid or missing `max_tokens` in
+(.camel) [lyz@dev10 toolkits]$ python wechat_official_toolkit.py
+2025-09-01 02:41:48,494 - root - WARNING - Invalid or missing `max_tokens` in
`model_config_dict`. Defaulting to 999_999_999 tokens.
-Text Message Response:
+Text Message Response:
I've successfully completed your request! Here's what I did:
-1. **Retrieved the follower list**: I got the list of all followers, which
+1. **Retrieved the follower list**: I got the list of all followers, which
shows there are 6 total followers.
-2. **Identified the fifth follower**: From the list, the fifth follower has
+2. **Identified the fifth follower**: From the list, the fifth follower has
the OpenID: `oKSAF2******rEJI`
-3. **Sent a welcome message**: I successfully sent a welcome message to the
-fifth follower with the content: "Welcome! Thank you for following our WeChat
+3. **Sent a welcome message**: I successfully sent a welcome message to the
+fifth follower with the content: "Welcome! Thank you for following our WeChat
Official Account. We're excited to have you join our community!"
-The message was delivered successfully to the fifth follower in your follower
+The message was delivered successfully to the fifth follower in your follower
list.
Tool calls:
1. get_followers_list({'next_openid': ''})
- 2. send_customer_message({'openid': 'oKSAF2******rEJI', 'content': "Welcome!
- Thank you for following our WeChat Official Account. We're excited to have
+ 2. send_customer_message({'openid': 'oKSAF2******rEJI', 'content': "Welcome!
+ Thank you for following our WeChat Official Account. We're excited to have
you join our community!", 'msgtype': 'text'})
==============================
-Image Upload and Send Response:
+Image Upload and Send Response:
Perfect! I've successfully completed both tasks:
-1. **Retrieved followers list**: Found 6 followers total, with the fifth
+1. **Retrieved followers list**: Found 6 followers total, with the fifth
follower having OpenID: `oKSAF2******rEJI`
-2. **Sent welcome message**: Successfully sent a welcome text message to the
+2. **Sent welcome message**: Successfully sent a welcome text message to the
fifth follower.
-3. **Uploaded image**: Successfully uploaded the CAMEL logo image as temporary
+3. **Uploaded image**: Successfully uploaded the CAMEL logo image as temporary
media with media_id: `A_6Hlu******MW_ubO`
4. **Sent image**: Successfully sent the uploaded image to the fifth follower.
-Both the welcome message and the CAMEL logo image have been delivered to the
+Both the welcome message and the CAMEL logo image have been delivered to the
fifth follower!
Tool calls:
1. upload_wechat_media({'media_type': 'image', 'file_path': '/home/lyz/Camel/
CAMEL_logo.jpg', 'permanent': False, 'description': None})
- 2. send_customer_message({'openid': 'oKSAF2******rEJI', 'content':
+ 2. send_customer_message({'openid': 'oKSAF2******rEJI', 'content':
'A_6Hlu******MW_ubO', 'msgtype': 'image'})
==============================
-User Info Response:
+User Info Response:
-I've retrieved the detailed information about the fifth follower. Here are the
+I've retrieved the detailed information about the fifth follower. Here are the
key details:
**Language Preference:**
@@ -198,25 +198,25 @@ def main():
- Tags: No tags assigned
- Custom Remark: None
-The follower is actively subscribed and prefers Chinese (mainland China)
+The follower is actively subscribed and prefers Chinese (mainland China)
language. They joined the account by scanning a QR code.
Tool calls:
1. get_user_info({'openid': 'oKSAF2******rEJI', 'lang': 'en'})
==============================
-Permanent Media Upload Response:
+Permanent Media Upload Response:
Perfect! I've successfully completed both tasks:
## 1. Image Upload (Permanent Media)
-The image `/home/lyz/Camel/CAMEL_logo.jpg` has been uploaded as permanent
+The image `/home/lyz/Camel/CAMEL_logo.jpg` has been uploaded as permanent
media with the following details:
- **Media ID**: `Rd1Ljw******mBUw_`
- **URL**: `https://mmbiz.qpic.cn/.../0?wx_fmt=jpeg`
## 2. Image Media List
-I retrieved the list of all permanent image media files. Here's what's
+I retrieved the list of all permanent image media files. Here's what's
currently in your media library:
**Total Count**: 5 image files
@@ -243,7 +243,7 @@ def main():
- Media ID: `Rd1Ljw******QKlyt`
- Update Time: 1756539999
-All files appear to be the same CAMEL logo image uploaded at different times.
+All files appear to be the same CAMEL logo image uploaded at different times.
The most recent upload is now available for use in your WeChat communications.
Tool calls:
1. upload_wechat_media({'media_type': 'image', 'file_path': '/home/lyz/Camel/
diff --git a/examples/toolkits/wolfram_alpha_toolkit.py b/examples/toolkits/wolfram_alpha_toolkit.py
index 521dce1b0a..7f2af2d0fa 100644
--- a/examples/toolkits/wolfram_alpha_toolkit.py
+++ b/examples/toolkits/wolfram_alpha_toolkit.py
@@ -49,9 +49,9 @@
5. Darmstadtium (Ds) - 34.8 g/cm³
Tool calls:
-[ToolCallingRecord(tool_name='query_wolfram_alpha', args={'query': 'densest
-elemental metals'}, result='1 | hassium | 41 g/cm^3 | \n2 | meitnerium | 37.4
-g/cm^3 | \n3 | bohrium | 37.1 g/cm^3 | \n4 | seaborgium | 35.3 g/cm^3 | \n5 |
+[ToolCallingRecord(tool_name='query_wolfram_alpha', args={'query': 'densest
+elemental metals'}, result='1 | hassium | 41 g/cm^3 | \n2 | meitnerium | 37.4
+g/cm^3 | \n3 | bohrium | 37.1 g/cm^3 | \n4 | seaborgium | 35.3 g/cm^3 | \n5 |
darmstadtium | 34.8 g/cm^3 |', tool_call_id='call_DNUzXQSQxAY3R71WMQXhKjBK')]
===============================================================================
'''
@@ -83,53 +83,53 @@
Tool calls:
[ToolCallingRecord(tool_name='query_wolfram_alpha_step_by_step', args=
-{'query': '5 densest elemental metals'}, result={'query': '5 densest elemental
-metals', 'pod_info': [{'title': 'Input interpretation', 'description': '5
+{'query': '5 densest elemental metals'}, result={'query': '5 densest elemental
+metals', 'pod_info': [{'title': 'Input interpretation', 'description': '5
densest metallic elements | by mass density', 'image_url': 'https://www6b3.
wolframalpha.com/Calculate/MSP/MSP961i0eg636ce4a7a95000064bh6be1f77a45af?
-MSPStoreType=image/gif&s=10'}, {'title': 'Periodic table location',
+MSPStoreType=image/gif&s=10'}, {'title': 'Periodic table location',
'description': None, 'image_url': 'https://www6b3.wolframalpha.com/Calculate/
-MSP/MSP971i0eg636ce4a7a9500001668a66eh7ifgd6g?MSPStoreType=image/gif&s=10'},
+MSP/MSP971i0eg636ce4a7a9500001668a66eh7ifgd6g?MSPStoreType=image/gif&s=10'},
{'title': 'Images', 'description': None, 'image_url': 'https://www6b3.
wolframalpha.com/Calculate/MSP/MSP981i0eg636ce4a7a95000025abi817gdd2964g?
-MSPStoreType=image/gif&s=10'}, {'title': 'Basic elemental properties',
-'description': '| atomic symbol | atomic number\nhassium | Hs |
-108\nmeitnerium | Mt | 109\nbohrium | Bh | 107\nseaborgium | Sg |
-106\ndarmstadtium | Ds | 110\n | atomic mass | half-life\nhassium | 269 u | 67
-min\nmeitnerium | 277 u | 30 min\nbohrium | 270 u | 90 min\nseaborgium | 269
+MSPStoreType=image/gif&s=10'}, {'title': 'Basic elemental properties',
+'description': '| atomic symbol | atomic number\nhassium | Hs |
+108\nmeitnerium | Mt | 109\nbohrium | Bh | 107\nseaborgium | Sg |
+106\ndarmstadtium | Ds | 110\n | atomic mass | half-life\nhassium | 269 u | 67
+min\nmeitnerium | 277 u | 30 min\nbohrium | 270 u | 90 min\nseaborgium | 269
u | 120 min\ndarmstadtium | 281 u | 4 min', 'image_url': 'https://www6b3.
wolframalpha.com/Calculate/MSP/MSP991i0eg636ce4a7a9500003263452b10d0d8f7?
-MSPStoreType=image/gif&s=10'}, {'title': 'Result', 'description': '1 |
+MSPStoreType=image/gif&s=10'}, {'title': 'Result', 'description': '1 |
hassium | 41 g/cm^3 | \n2 | meitnerium | 37.4 g/cm^3 | \n3 | bohrium | 37.1 g/
-cm^3 | \n4 | seaborgium | 35.3 g/cm^3 | \n5 | darmstadtium | 34.8 g/cm^3 |',
+cm^3 | \n4 | seaborgium | 35.3 g/cm^3 | \n5 | darmstadtium | 34.8 g/cm^3 |',
'image_url': 'https://www6b3.wolframalpha.com/Calculate/MSP/
-MSP1011i0eg636ce4a7a95000021433b3eei2283i7?MSPStoreType=image/gif&s=10'},
-{'title': 'Material properties', 'description': 'mass density | median | 37.1
-g/cm^3\n | highest | 41 g/cm^3 (hassium)\n | lowest | 34.8 g/cm^3
-(darmstadtium)\n | distribution | \n(properties at standard conditions)',
+MSP1011i0eg636ce4a7a95000021433b3eei2283i7?MSPStoreType=image/gif&s=10'},
+{'title': 'Material properties', 'description': 'mass density | median | 37.1
+g/cm^3\n | highest | 41 g/cm^3 (hassium)\n | lowest | 34.8 g/cm^3
+(darmstadtium)\n | distribution | \n(properties at standard conditions)',
'image_url': 'https://www6b3.wolframalpha.com/Calculate/MSP/
-MSP1031i0eg636ce4a7a95000012h4aa1fg10h84eg?MSPStoreType=image/gif&s=10'},
+MSP1031i0eg636ce4a7a95000012h4aa1fg10h84eg?MSPStoreType=image/gif&s=10'},
{'title': 'Atomic properties', 'description': 'term symbol | all | ^3D_3 | ^4F_
-(9/2) | ^5D_0 | ^5D_4 | ^6S_(5/2)\n(electronic ground state properties)',
+(9/2) | ^5D_0 | ^5D_4 | ^6S_(5/2)\n(electronic ground state properties)',
'image_url': 'https://www6b3.wolframalpha.com/Calculate/MSP/
-MSP1051i0eg636ce4a7a95000024dii0bd852f9bib?MSPStoreType=image/gif&s=10'},
-{'title': 'Abundances', 'description': 'crust abundance | median | 0 mass%\n |
-highest | 0 mass% (5 elements)\n | lowest | 0 mass% (5 elements)\nhuman
-abundance | median | 0 mass%\n | highest | 0 mass% (5 elements)\n | lowest | 0
+MSP1051i0eg636ce4a7a95000024dii0bd852f9bib?MSPStoreType=image/gif&s=10'},
+{'title': 'Abundances', 'description': 'crust abundance | median | 0 mass%\n |
+highest | 0 mass% (5 elements)\n | lowest | 0 mass% (5 elements)\nhuman
+abundance | median | 0 mass%\n | highest | 0 mass% (5 elements)\n | lowest | 0
mass% (5 elements)', 'image_url': 'https://www6b3.wolframalpha.com/Calculate/
-MSP/MSP1061i0eg636ce4a7a9500005iagh7h9413cc095?MSPStoreType=image/gif&s=10'},
-{'title': 'Nuclear properties', 'description': 'half-life | median | 67
-min\n | highest | 120 min (seaborgium)\n | lowest | 4 min (darmstadtium)\n |
-distribution | \nspecific radioactivity | highest | 6.123×10^6 TBq/g
+MSP/MSP1061i0eg636ce4a7a9500005iagh7h9413cc095?MSPStoreType=image/gif&s=10'},
+{'title': 'Nuclear properties', 'description': 'half-life | median | 67
+min\n | highest | 120 min (seaborgium)\n | lowest | 4 min (darmstadtium)\n |
+distribution | \nspecific radioactivity | highest | 6.123×10^6 TBq/g
(darmstadtium)\n | lowest | 223871 TBq/g (seaborgium)\n | median | 446085 TBq/
g', 'image_url': 'https://www6b3.wolframalpha.com/Calculate/MSP/
-MSP1081i0eg636ce4a7a9500001ae7307034bg8h9a?MSPStoreType=image/gif&s=10'},
-{'title': 'Wikipedia page hits history', 'description': None, 'image_url':
+MSP1081i0eg636ce4a7a9500001ae7307034bg8h9a?MSPStoreType=image/gif&s=10'},
+{'title': 'Wikipedia page hits history', 'description': None, 'image_url':
'https://www6b3.wolframalpha.com/Calculate/MSP/
-MSP1101i0eg636ce4a7a9500003aeef8d2fi6ih413?MSPStoreType=image/gif&s=10'}],
-'final_answer': '1 | hassium | 41 g/cm^3 | \n2 | meitnerium | 37.4 g/cm^3 |
-\n3 | bohrium | 37.1 g/cm^3 | \n4 | seaborgium | 35.3 g/cm^3 | \n5 |
-darmstadtium | 34.8 g/cm^3 |', 'steps': {}},
+MSP1101i0eg636ce4a7a9500003aeef8d2fi6ih413?MSPStoreType=image/gif&s=10'}],
+'final_answer': '1 | hassium | 41 g/cm^3 | \n2 | meitnerium | 37.4 g/cm^3 |
+\n3 | bohrium | 37.1 g/cm^3 | \n4 | seaborgium | 35.3 g/cm^3 | \n5 |
+darmstadtium | 34.8 g/cm^3 |', 'steps': {}},
tool_call_id='call_e6gApIh8ohCARb4fb9WDxEsq')]
===============================================================================
'''
@@ -165,36 +165,36 @@
These values represent the density of each metal at standard conditions.
Tool calls:
-[ToolCallingRecord(tool_name='query_wolfram_alpha_llm', args={'query': '10
-densest elemental metals'}, result='Query:\n"10 densest elemental
-metals"\n\nInput interpretation:\n10 densest metallic elements | by mass
-density\n\nBasic elemental properties:\natomic symbol | all | Bh | Db | Ds |
-Hs | Ir | Mt | Os | Rf | Rg | Sg\natomic number | median | 106.5\n | highest |
-111 (roentgenium)\n | lowest | 76 (osmium)\n | distribution | \natomic mass |
+[ToolCallingRecord(tool_name='query_wolfram_alpha_llm', args={'query': '10
+densest elemental metals'}, result='Query:\n"10 densest elemental
+metals"\n\nInput interpretation:\n10 densest metallic elements | by mass
+density\n\nBasic elemental properties:\natomic symbol | all | Bh | Db | Ds |
+Hs | Ir | Mt | Os | Rf | Rg | Sg\natomic number | median | 106.5\n | highest |
+111 (roentgenium)\n | lowest | 76 (osmium)\n | distribution | \natomic mass |
median | 269 u\n | highest | 282 u (roentgenium)\n | lowest | 190.23 u (osmium)
-\n | distribution | \nhalf-life | median | 78 min\n | highest | 13 h
-(rutherfordium)\n | lowest | 4 min (darmstadtium)\n | distribution |
-\n\nResult:\n1 | hassium | 41 g/cm^3 | \n2 | meitnerium | 37.4 g/cm^3 | \n3 |
-bohrium | 37.1 g/cm^3 | \n4 | seaborgium | 35.3 g/cm^3 | \n5 | darmstadtium |
-34.8 g/cm^3 | \n6 | dubnium | 29.3 g/cm^3 | \n7 | roentgenium | 28.7 g/cm^3 |
-\n8 | rutherfordium | 23.2 g/cm^3 | \n9 | osmium | 22.59 g/cm^3 | \n10 |
-iridium | 22.56 g/cm^3 | \n\nThermodynamic properties:\nphase at STP | all |
-solid\n(properties at standard conditions)\n\nMaterial properties:\nmass
-density | median | 32.1 g/cm^3\n | highest | 41 g/cm^3 (hassium)\n | lowest |
+\n | distribution | \nhalf-life | median | 78 min\n | highest | 13 h
+(rutherfordium)\n | lowest | 4 min (darmstadtium)\n | distribution |
+\n\nResult:\n1 | hassium | 41 g/cm^3 | \n2 | meitnerium | 37.4 g/cm^3 | \n3 |
+bohrium | 37.1 g/cm^3 | \n4 | seaborgium | 35.3 g/cm^3 | \n5 | darmstadtium |
+34.8 g/cm^3 | \n6 | dubnium | 29.3 g/cm^3 | \n7 | roentgenium | 28.7 g/cm^3 |
+\n8 | rutherfordium | 23.2 g/cm^3 | \n9 | osmium | 22.59 g/cm^3 | \n10 |
+iridium | 22.56 g/cm^3 | \n\nThermodynamic properties:\nphase at STP | all |
+solid\n(properties at standard conditions)\n\nMaterial properties:\nmass
+density | median | 32.1 g/cm^3\n | highest | 41 g/cm^3 (hassium)\n | lowest |
22.56 g/cm^3 (iridium)\n | distribution | \n(properties at standard conditions)
-\n\nReactivity:\nvalence | median | 6\n | highest | 7 (bohrium)\n | lowest | 4
-(rutherfordium)\n | distribution | \n\nAtomic properties:\nterm symbol | all |
+\n\nReactivity:\nvalence | median | 6\n | highest | 7 (bohrium)\n | lowest | 4
+(rutherfordium)\n | distribution | \n\nAtomic properties:\nterm symbol | all |
^2S_(1/2) | ^3D_3 | ^3F_2 | ^4F_(3/2) | ^4F_(9/2) | ^5D_0 | ^5D_4 | ^6S_(5/2)\n
-(electronic ground state properties)\n\nAbundances:\ncrust abundance |
-median | 0 mass%\n | highest | 1.8×10^-7 mass% (osmium)\n | lowest | 0 mass%
-(8 elements)\nhuman abundance | median | 0 mass%\n | highest | 0 mass% (8
-elements)\n | lowest | 0 mass% (8 elements)\n\nNuclear
+(electronic ground state properties)\n\nAbundances:\ncrust abundance |
+median | 0 mass%\n | highest | 1.8×10^-7 mass% (osmium)\n | lowest | 0 mass%
+(8 elements)\nhuman abundance | median | 0 mass%\n | highest | 0 mass% (8
+elements)\n | lowest | 0 mass% (8 elements)\n\nNuclear
properties:\nhalf-life | median | 78 min\n | highest | 13 h (rutherfordium)
-\n | lowest | 4 min (darmstadtium)\n | distribution | \nspecific
-radioactivity | highest | 6.123×10^6 TBq/g (darmstadtium)\n | lowest | 33169
+\n | lowest | 4 min (darmstadtium)\n | distribution | \nspecific
+radioactivity | highest | 6.123×10^6 TBq/g (darmstadtium)\n | lowest | 33169
TBq/g (rutherfordium)\n | median | 366018 TBq/g\n | distribution | \n\nWolfram|
Alpha website result for "10 densest elemental metals":\nhttps://www6b3.
-wolframalpha.com/input?i=10+densest+elemental+metals',
+wolframalpha.com/input?i=10+densest+elemental+metals',
tool_call_id='call_b2FBtvFoRpP17UPOXDEvQg5Q')]
===============================================================================
'''
diff --git a/examples/toolkits/zapier_toolkit.py b/examples/toolkits/zapier_toolkit.py
index 8c1d9e0065..6e2c773854 100644
--- a/examples/toolkits/zapier_toolkit.py
+++ b/examples/toolkits/zapier_toolkit.py
@@ -18,9 +18,9 @@
from camel.types import ModelPlatformType, ModelType
# Define system message
-sys_msg = """You are a helpful AI assistant that can use Zapier AI tools to
-perform various tasks. When using tools, first list the available tools using
-list_actions, then use the appropriate tool based on the task. Always provide
+sys_msg = """You are a helpful AI assistant that can use Zapier AI tools to
+perform various tasks. When using tools, first list the available tools using
+list_actions, then use the appropriate tool based on the task. Always provide
clear explanations of what you're doing."""
# Set model config
@@ -47,9 +47,9 @@
print("\n" + "=" * 80 + "\n")
# Now, use the translation tool
-usr_msg = """Now that we can see the translation tool is available, please
-use it to translate 'hello camel' from en to zh. Use
-the tool ID from the list above and make sure to specify the language codes
+usr_msg = """Now that we can see the translation tool is available, please
+use it to translate 'hello camel' from en to zh. Use
+the tool ID from the list above and make sure to specify the language codes
correctly in the instructions."""
response = camel_agent.step(usr_msg)
print("Translation Result:")
@@ -61,7 +61,7 @@
1. **Gmail: Find Email**
- **ID:** 0d82cfd3-2bd7-4e08-9f3d-692719e81a26
- - **Description:** This action allows you to find an email in Gmail based
+ - **Description:** This action allows you to find an email in Gmail based
on a search string.
- **Parameters:**
- `instructions`: Instructions for executing the action.
@@ -69,14 +69,14 @@
2. **Translate by Zapier: Translate Text**
- **ID:** f7527450-d7c7-401f-a764-2f69f622e7f3
- - **Description:** This action translates text into a specified target
+ - **Description:** This action translates text into a specified target
language.
- **Parameters:**
- `instructions`: Instructions for executing the action.
- `Text`: The text to be translated.
- `Target_Language`: The language to translate the text into.
-If you need to perform a specific task using one of these tools, please let me
+If you need to perform a specific task using one of these tools, please let me
know!
================================================================================
@@ -89,7 +89,7 @@
- **Source Language:** English (en)
- **Target Language:** Chinese (zh)
-If you need any further assistance or additional translations, feel free to
+If you need any further assistance or additional translations, feel free to
ask!
===============================================================================
"""
diff --git a/examples/usecases/aci_mcp/camel_mcp_aci.py b/examples/usecases/aci_mcp/camel_mcp_aci.py
index 43972e2db4..824f931be2 100644
--- a/examples/usecases/aci_mcp/camel_mcp_aci.py
+++ b/examples/usecases/aci_mcp/camel_mcp_aci.py
@@ -105,4 +105,4 @@ async def main():
if __name__ == "__main__":
- asyncio.run(main())
\ No newline at end of file
+ asyncio.run(main())
diff --git a/examples/usecases/airbnb_mcp/README.md b/examples/usecases/airbnb_mcp/README.md
index 773af87ca4..c0f76368ec 100644
--- a/examples/usecases/airbnb_mcp/README.md
+++ b/examples/usecases/airbnb_mcp/README.md
@@ -51,13 +51,13 @@ A Streamlit app that leverages the [CAMEL-AI OWL framework](https://github.com/c
## ⚙️ Configuration
-1. **Environment variables**
+1. **Environment variables**
Create a `.env` file in the project root with:
```ini
OPENAI_API_KEY=your_openai_key_here
```
-2. **MCP Server config**
+2. **MCP Server config**
Ensure `mcp_servers_config.json` (next to `app.py`) contains:
```json
{
diff --git a/examples/usecases/airbnb_mcp/app.py b/examples/usecases/airbnb_mcp/app.py
index 1f998f39e8..f2b06225ee 100644
--- a/examples/usecases/airbnb_mcp/app.py
+++ b/examples/usecases/airbnb_mcp/app.py
@@ -88,9 +88,9 @@
# Build prompt
prompt = f"""
- Find me the best Airbnb in {city} with a check-in date
- of {checkin:%Y-%m-%d} and a check-out date of
- {checkout:%Y-%m-%d} for {adults} adults.
+ Find me the best Airbnb in {city} with a check-in date
+ of {checkin:%Y-%m-%d} and a check-out date of
+ {checkout:%Y-%m-%d} for {adults} adults.
Return the top 5 listings with their names, prices, and locations.
"""
diff --git a/examples/usecases/airbnb_mcp/requirements.txt b/examples/usecases/airbnb_mcp/requirements.txt
index c80191302b..0fb50f864a 100644
--- a/examples/usecases/airbnb_mcp/requirements.txt
+++ b/examples/usecases/airbnb_mcp/requirements.txt
@@ -1,4 +1,3 @@
camel-ai[all]
streamlit
python-dotenv
-
diff --git a/examples/usecases/chat_with_github/README.md b/examples/usecases/chat_with_github/README.md
index 999b613c97..7872fb0d3c 100644
--- a/examples/usecases/chat_with_github/README.md
+++ b/examples/usecases/chat_with_github/README.md
@@ -82,7 +82,7 @@ _A conversation where the user asks about the repo structure and README file._
```bash
streamlit run app.py
-```
+```
Set the repo in the sidebar, then start chatting!
---
@@ -109,5 +109,5 @@ Gitingest/
└── .env # Env vars
-*Happy repo chatting!*
+*Happy repo chatting!*
````
diff --git a/examples/usecases/chat_with_github/mcp_servers_config.json b/examples/usecases/chat_with_github/mcp_servers_config.json
index c7dfe58574..206eb4f5fc 100644
--- a/examples/usecases/chat_with_github/mcp_servers_config.json
+++ b/examples/usecases/chat_with_github/mcp_servers_config.json
@@ -10,4 +10,3 @@
}
}
}
-
\ No newline at end of file
diff --git a/examples/usecases/chat_with_github/requirements.txt b/examples/usecases/chat_with_github/requirements.txt
index c80191302b..0fb50f864a 100644
--- a/examples/usecases/chat_with_github/requirements.txt
+++ b/examples/usecases/chat_with_github/requirements.txt
@@ -1,4 +1,3 @@
camel-ai[all]
streamlit
python-dotenv
-
diff --git a/examples/usecases/chat_with_youtube/app.py b/examples/usecases/chat_with_youtube/app.py
index 8b1b8847cd..49ff573067 100644
--- a/examples/usecases/chat_with_youtube/app.py
+++ b/examples/usecases/chat_with_youtube/app.py
@@ -43,7 +43,7 @@
# Extract audio
audio_path = os.path.splitext(video_path)[0] + ".wav"
ffmpeg.input(video_path).output(audio_path, ac=1, ar=16000).run(overwrite_output=True)
-
+
# Transcribe
transcript = audio_toolkit.audio2text(audio_path)
diff --git a/examples/usecases/chat_with_youtube/requirements.txt b/examples/usecases/chat_with_youtube/requirements.txt
index eac9623449..b8e94187df 100644
--- a/examples/usecases/chat_with_youtube/requirements.txt
+++ b/examples/usecases/chat_with_youtube/requirements.txt
@@ -1,3 +1,3 @@
streamlit==1.45.1
camel-ai[all]==0.2.61
-docx2markdown==0.1.1
\ No newline at end of file
+docx2markdown==0.1.1
diff --git a/examples/usecases/cloudfare_mcp_camel/README.md b/examples/usecases/cloudfare_mcp_camel/README.md
index 674b082d8d..8d8dc501a7 100644
--- a/examples/usecases/cloudfare_mcp_camel/README.md
+++ b/examples/usecases/cloudfare_mcp_camel/README.md
@@ -52,7 +52,7 @@ This project provides a Streamlit-based web interface to interact with public Cl
```
If `mcp-remote` is used by your `mcp_config.json` (as in the default configuration for this project), ensure it can be executed. You might need to install it globally or ensure `npx` can run it:
```bash
- npm install -g mcp-remote
+ npm install -g mcp-remote
```
4. **Set up your API Key:**
@@ -109,4 +109,4 @@ The application will open in your default web browser, usually at `http://localh
## Contributing
-Feel free to open issues or submit pull requests if you have suggestions for improvements or bug fixes.
+Feel free to open issues or submit pull requests if you have suggestions for improvements or bug fixes.
diff --git a/examples/usecases/cloudfare_mcp_camel/app.py b/examples/usecases/cloudfare_mcp_camel/app.py
index d9f57ed36d..9039610ce7 100644
--- a/examples/usecases/cloudfare_mcp_camel/app.py
+++ b/examples/usecases/cloudfare_mcp_camel/app.py
@@ -71,7 +71,7 @@
with tab1:
st.header("Cloudflare Documentation")
st.markdown("Ask questions about Cloudflare's documentation and services.")
-
+
doc_query = st.text_area("Enter your documentation query:", height=100)
if st.button("Search Documentation", key="doc_search"):
if doc_query:
@@ -90,12 +90,12 @@
with tab2:
st.header("Cloudflare Radar")
st.markdown("Get insights about internet traffic and trends.")
-
+
radar_options = st.selectbox(
"Select Radar Query Type:",
["Traffic Trends", "URL Analysis", "DNS Analytics", "HTTP Protocol Analysis"]
)
-
+
if radar_options == "Traffic Trends":
st.markdown("Get insights about global internet traffic trends.")
trend_query = st.text_input("Enter your trend query (e.g., 'Show me traffic trends for the last week'):")
@@ -110,7 +110,7 @@
else: # HTTP Protocol Analysis
st.markdown("Get insights about HTTP protocol usage.")
trend_query = "Show HTTP protocol analysis and trends"
-
+
if st.button("Get Radar Insights", key="radar_search"):
if trend_query:
with st.spinner("Fetching radar insights..."):
@@ -127,21 +127,21 @@
with tab3:
st.header("Cloudflare Browser")
st.markdown("Fetch and analyze web pages.")
-
+
browser_options = st.selectbox(
"Select Browser Action:",
["Fetch Page", "Take Screenshot", "Convert to Markdown"]
)
-
+
url = st.text_input("Enter URL:")
-
+
if browser_options == "Fetch Page":
action = "Fetch and analyze the content of"
elif browser_options == "Take Screenshot":
action = "Take a screenshot of"
else: # Convert to Markdown
action = "Convert to markdown the content of"
-
+
if st.button("Execute Browser Action", key="browser_action"):
if url:
with st.spinner("Processing..."):
@@ -160,17 +160,17 @@
st.header("About")
st.markdown("""
This interface provides access to Cloudflare's public MCP servers, powered by CAMEL AI:
-
+
- **Documentation Server**: Access Cloudflare's documentation
- **Radar Server**: Get internet traffic insights
- **Browser Server**: Fetch and analyze web pages
-
+
Select a tab above to interact with each service.
""")
-
+
st.header("Tips")
st.markdown("""
- Be specific in your queries
- For Radar insights, try different query types
- For Browser actions, ensure URLs are complete
- """)
\ No newline at end of file
+ """)
diff --git a/examples/usecases/cloudfare_mcp_camel/mcp_config.json b/examples/usecases/cloudfare_mcp_camel/mcp_config.json
index d64273781f..39c8bb1336 100644
--- a/examples/usecases/cloudfare_mcp_camel/mcp_config.json
+++ b/examples/usecases/cloudfare_mcp_camel/mcp_config.json
@@ -47,4 +47,4 @@
"args": ["mcp-remote", "https://dns-analytics.mcp.cloudflare.com/sse"]
}
}
-}
\ No newline at end of file
+}
diff --git a/examples/usecases/cloudfare_mcp_camel/requirements.txt b/examples/usecases/cloudfare_mcp_camel/requirements.txt
index 531af0b664..35cdbf8288 100644
--- a/examples/usecases/cloudfare_mcp_camel/requirements.txt
+++ b/examples/usecases/cloudfare_mcp_camel/requirements.txt
@@ -1,3 +1,3 @@
streamlit>=1.32.0
camel-ai[all]==0.2.59
-python-dotenv>=1.0.0
\ No newline at end of file
+python-dotenv>=1.0.0
diff --git a/examples/usecases/cloudfare_mcp_camel/run_cloudflare_mcp_with_camel_ai.py b/examples/usecases/cloudfare_mcp_camel/run_cloudflare_mcp_with_camel_ai.py
index 364b6a1969..8c6561940f 100644
--- a/examples/usecases/cloudfare_mcp_camel/run_cloudflare_mcp_with_camel_ai.py
+++ b/examples/usecases/cloudfare_mcp_camel/run_cloudflare_mcp_with_camel_ai.py
@@ -52,4 +52,4 @@
# Example interaction with the agent
user_msg = "list me the index of the documentation?"
response = agent.step(user_msg)
-print(response)
\ No newline at end of file
+print(response)
diff --git a/examples/usecases/codeforces_question_solver/README.md b/examples/usecases/codeforces_question_solver/README.md
index fce1460a25..0b24d42949 100644
--- a/examples/usecases/codeforces_question_solver/README.md
+++ b/examples/usecases/codeforces_question_solver/README.md
@@ -122,4 +122,4 @@ Then, open the provided URL in your browser to interact with the app.
---
-Feel free to contribute by submitting issues or pull requests!
\ No newline at end of file
+Feel free to contribute by submitting issues or pull requests!
diff --git a/examples/usecases/codeforces_question_solver/app.py b/examples/usecases/codeforces_question_solver/app.py
index e330afbbee..7e439b7c10 100644
--- a/examples/usecases/codeforces_question_solver/app.py
+++ b/examples/usecases/codeforces_question_solver/app.py
@@ -30,7 +30,7 @@
class ProblemSolver:
"""Main problem solver with CAMEL agents"""
-
+
def __init__(self):
# Initialize model
self.model = ModelFactory.create(
@@ -38,30 +38,30 @@ def __init__(self):
model_type=ModelType.GPT_4O,
model_config_dict={"temperature": 0.1, "max_tokens": 4000}
)
-
+
# Initialize toolkits
self.code_toolkit = CodeExecutionToolkit()
self.code_tools = self.code_toolkit.get_tools()
self.math_tools = MathToolkit().get_tools()
-
+
# Create main agent
self.agent = ChatAgent(
model=self.model,
tools=[*self.code_tools, *self.math_tools],
system_message=self._get_system_message()
)
-
+
# Initialize Firecrawl
self.firecrawl = Firecrawl()
-
+
def _get_system_message(self):
return """You are an expert competitive programming solver.
-
+
Your role:
1. Analyze competitive programming problems
2. Generate clean, efficient Python solutions
3. Fix bugs in solutions based on test failures
-
+
Requirements:
- Use only standard library imports
- Read input using input() and print output using print()
@@ -69,7 +69,7 @@ def _get_system_message(self):
- Handle edge cases properly
- Provide ONLY executable Python code when generating solutions
"""
-
+
def fetch_content(self, url: str) -> str:
"""Fetch content from URL"""
try:
@@ -79,46 +79,46 @@ def fetch_content(self, url: str) -> str:
return str(result)
except Exception as e:
raise Exception(f"Failed to fetch content: {str(e)}")
-
+
def solve_problem(self, problem_content: str) -> str:
"""Generate solution for the problem"""
query = f"""
Solve this competitive programming problem:
-
+
{problem_content}
-
+
Provide ONLY the complete, executable Python code.
"""
-
+
response = self.agent.step(query)
return self._extract_code(response.msgs[0].content.strip())
-
+
def debug_solution(self, code: str, problem_content: str, failed_tests: list) -> str:
"""Debug and fix the solution"""
query = f"""
Fix this failing solution:
-
+
PROBLEM:
{problem_content}
-
+
CURRENT CODE:
{code}
-
+
FAILED TESTS:
{json.dumps(failed_tests, indent=2)}
-
+
Provide the corrected Python code.
"""
-
+
response = self.agent.step(query)
return self._extract_code(response.msgs[0].content.strip())
-
+
def run_test(self, code: str, test_input: str, expected_output: str) -> dict:
"""Run a single test case using CAMEL's code execution"""
try:
# Escape the test input to handle newlines and quotes properly
escaped_input = test_input.replace('\\', '\\\\').replace('"', '\\"').replace('\n', '\\n')
-
+
# Create a test script that handles input/output
test_code = f'''
import sys
@@ -138,15 +138,15 @@ def run_test(self, code: str, test_input: str, expected_output: str) -> dict:
try:
# Execute the solution code
{self._indent_code(code, 4)}
-
+
# Get the output
output = sys.stdout.getvalue().strip()
-
+
except Exception as e:
error_occurred = True
error_message = str(e)
output = ""
-
+
finally:
# Restore original stdin/stdout
sys.stdin = original_stdin
@@ -158,10 +158,10 @@ def run_test(self, code: str, test_input: str, expected_output: str) -> dict:
else:
print(output)
'''
-
+
# Use CAMEL's code execution toolkit
execution_result = self.code_toolkit.execute_code(test_code)
-
+
# Handle different return types from CAMEL
if isinstance(execution_result, dict):
# If it's a dictionary, use the existing logic
@@ -176,7 +176,7 @@ def run_test(self, code: str, test_input: str, expected_output: str) -> dict:
else:
# If it's a string (more likely), use it directly
result_str = str(execution_result).strip()
-
+
# Extract actual output from CAMEL's execution result
# Look for the pattern "> Executed Results:" and get everything after it
if "> Executed Results:" in result_str:
@@ -194,16 +194,16 @@ def run_test(self, code: str, test_input: str, expected_output: str) -> dict:
else:
# Fallback: use the entire result
actual_output = result_str
-
+
expected = expected_output.strip()
-
+
return {
'passed': actual_output == expected,
'actual': actual_output,
'expected': expected,
'error': None if actual_output == expected else "Output mismatch"
}
-
+
except Exception as e:
return {
'passed': False,
@@ -211,12 +211,12 @@ def run_test(self, code: str, test_input: str, expected_output: str) -> dict:
'actual': "",
'expected': expected_output.strip()
}
-
+
def _indent_code(self, code: str, spaces: int) -> str:
"""Indent code by specified number of spaces"""
indent = ' ' * spaces
return '\n'.join(indent + line if line.strip() else line for line in code.split('\n'))
-
+
def _extract_code(self, response: str) -> str:
"""Extract Python code from response"""
# Try to find code block
@@ -225,20 +225,20 @@ def _extract_code(self, response: str) -> str:
r'```py\s*(.*?)\s*```',
r'```\s*(.*?)\s*```'
]
-
+
for pattern in patterns:
match = re.search(pattern, response, re.DOTALL)
if match:
code = match.group(1).strip()
if self._is_valid_python(code):
return code
-
+
# If no blocks found, try the whole response
if self._is_valid_python(response):
return response
-
+
return response
-
+
def _is_valid_python(self, code: str) -> bool:
"""Check if code is valid Python"""
try:
@@ -249,23 +249,23 @@ def _is_valid_python(self, code: str) -> bool:
class CodeforcesSolver(ProblemSolver):
"""Codeforces-specific solver"""
-
+
def build_url(self, problem_id: str) -> str:
"""Build Codeforces URL from problem ID"""
contest_id = ''.join(filter(str.isdigit, problem_id))
index = ''.join(filter(str.isalpha, problem_id)).upper()
return f"https://codeforces.com/contest/{contest_id}/problem/{index}"
-
+
def extract_samples(self, content: str) -> list:
"""Extract test samples from problem content"""
samples = []
-
+
# Pattern to match input/output blocks
patterns = [
re.compile(r'###\s*Input\s*```(?:\w+)?\s*(.*?)```.*?###\s*Output\s*```(?:\w+)?\s*(.*?)```', re.DOTALL | re.IGNORECASE),
re.compile(r'(?:Input|input)\s*(?:Copy)?\s*```(?:\w+)?\s*(.*?)```.*?(?:Output|output)\s*(?:Copy)?\s*```(?:\w+)?\s*(.*?)```', re.DOTALL | re.IGNORECASE),
]
-
+
for pattern in patterns:
matches = pattern.findall(content)
for inp, out in matches:
@@ -275,12 +275,12 @@ def extract_samples(self, content: str) -> list:
samples.append((inp_clean, out_clean))
if samples:
break
-
+
return samples[:5] # Limit to 5 samples
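Reviewer note: the sample-extraction logic in this hunk pairs fenced `Input`/`Output` blocks via regex. A minimal sketch of the same idea, using a single simplified pattern (the real patterns above are more permissive, and the `demo` problem text here is invented for illustration):

```python
import re

# Pair up "Input ``` ... ```" / "Output ``` ... ```" blocks, as the
# Codeforces extractor above does. DOTALL lets ".*?" cross newlines;
# the lazy quantifiers keep each match to one input/output pair.
PAIR = re.compile(
    r'Input\s*```\s*(.*?)\s*```.*?Output\s*```\s*(.*?)\s*```',
    re.DOTALL | re.IGNORECASE,
)

def extract_samples(content: str) -> list[tuple[str, str]]:
    samples = [(i.strip(), o.strip()) for i, o in PAIR.findall(content)]
    return samples[:5]  # mirror the 5-sample cap used above

demo = "Input\n```\n3\n1 2 3\n```\nOutput\n```\n6\n```"
print(extract_samples(demo))
```

Capping at five samples keeps the later test loop bounded even for problems with many examples.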
class LeetCodeSolver(ProblemSolver):
"""LeetCode-specific solver"""
-
+
def __init__(self):
super().__init__()
# Override system message for LeetCode
@@ -289,15 +289,15 @@ def __init__(self):
tools=[*self.code_tools, *self.math_tools],
system_message=self._get_leetcode_system_message()
)
-
+
def _get_leetcode_system_message(self):
return """You are an expert LeetCode problem solver.
-
+
Your role:
1. Analyze LeetCode problems
2. Generate clean, efficient Python solutions using class-based approach
3. Focus on algorithmic efficiency and clean code structure
-
+
Requirements:
- Write solutions as class methods (class Solution: def method_name(self, ...))
- Use appropriate data structures and algorithms
@@ -305,15 +305,15 @@ def _get_leetcode_system_message(self):
- Write clean, well-commented code
- Provide ONLY the complete Solution class when generating solutions
"""
-
+
def build_url(self, problem_slug: str) -> str:
"""Build LeetCode URL from problem slug"""
return f"https://leetcode.com/problems/{problem_slug}/"
-
+
def extract_samples(self, content: str) -> list:
"""Extract test samples from LeetCode problem content"""
samples = []
-
+
# More precise patterns for LeetCode examples
patterns = [
# Pattern 1: Example X: Input: ... Output: ...
@@ -321,17 +321,17 @@ def extract_samples(self, content: str) -> list:
# Pattern 2: Input: ... Output: ... (without Example prefix)
re.compile(r'(?:^|\n)Input:\s*([^\n]+)\s*Output:\s*([^\n]+)', re.IGNORECASE | re.MULTILINE),
]
-
+
for pattern in patterns:
matches = pattern.findall(content)
for inp, out in matches:
# Clean up the input and output
inp_clean = self._clean_sample_text(inp)
out_clean = self._clean_sample_text(out)
-
+
if inp_clean and out_clean:
samples.append((inp_clean, out_clean))
-
+
# Remove duplicates while preserving order
seen = set()
unique_samples = []
@@ -339,24 +339,24 @@ def extract_samples(self, content: str) -> list:
if sample not in seen:
seen.add(sample)
unique_samples.append(sample)
-
+
return unique_samples[:5] # Limit to 5 samples
-
+
def _clean_sample_text(self, text: str) -> str:
"""Clean sample text by removing markdown and extra formatting"""
if not text:
return ""
-
+
# Remove markdown code blocks
text = re.sub(r'```[^`]*```', '', text)
text = re.sub(r'`([^`]+)`', r'\1', text)
-
+
# Remove extra whitespace and newlines
text = re.sub(r'\s+', ' ', text.strip())
-
+
# Remove common prefixes/suffixes
text = re.sub(r'^(Input:|Output:)\s*', '', text, flags=re.IGNORECASE)
-
+
return text.strip()
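Reviewer note: `_clean_sample_text` above is a small normalization pipeline. A self-contained sketch of the same steps, with an invented LeetCode-style input string as the example:

```python
import re

def clean_sample_text(text: str) -> str:
    """Mirror of the cleanup above: drop fenced blocks, unwrap inline
    code, collapse whitespace, strip Input:/Output: labels."""
    if not text:
        return ""
    text = re.sub(r'```[^`]*```', '', text)           # fenced code blocks
    text = re.sub(r'`([^`]+)`', r'\1', text)          # inline backticks
    text = re.sub(r'\s+', ' ', text.strip())          # collapse whitespace
    text = re.sub(r'^(Input:|Output:)\s*', '', text, flags=re.IGNORECASE)
    return text.strip()

print(clean_sample_text('Input: `nums = [1,2,3]`,  k = 2'))
```

Note the order matters: labels are stripped last, after whitespace collapsing, so `Input:` is reliably at the start of the string when the final pattern runs.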
def solve_problem(platform: str, problem_id: str) -> bool:
@@ -372,35 +372,35 @@ def solve_problem(platform: str, problem_id: str) -> bool:
else:
st.error("Unsupported platform")
return False
-
+
# Store solver in session state
st.session_state.solver = solver
st.session_state.platform = platform
-
+
# Fetch problem content
with st.spinner("Fetching problem..."):
content = solver.fetch_content(url)
if not content:
st.error("Could not fetch problem content")
return False
-
+
st.session_state.problem_content = content
-
+
# Extract samples if available
if hasattr(solver, 'extract_samples'):
samples = solver.extract_samples(content)
st.session_state.samples = samples
if samples:
st.success(f"Found {len(samples)} test cases")
-
+
# Generate solution
with st.spinner("Generating solution..."):
code = solver.solve_problem(content)
st.session_state.generated_code = code
-
+
st.session_state.problem_solved = True
return True
-
+
except Exception as e:
st.error(f"Solving failed: {str(e)}")
return False
@@ -410,22 +410,22 @@ def improve_solution(max_attempts: int = 5) -> bool:
if 'solver' not in st.session_state:
st.error("No solver available")
return False
-
+
# Only allow improvement for Codeforces
if st.session_state.get('platform') != 'Codeforces':
st.info("Auto-fix is only available for Codeforces problems")
return False
-
+
solver = st.session_state.solver
samples = st.session_state.get('samples', [])
-
+
if not samples:
st.info("No test cases to validate")
return True
-
+
for attempt in range(max_attempts):
st.info(f"Improvement attempt {attempt + 1}/{max_attempts}...")
-
+
# Test current solution
failed_tests = []
for idx, (test_input, expected_output) in enumerate(samples):
@@ -438,11 +438,11 @@ def improve_solution(max_attempts: int = 5) -> bool:
'actual': result['actual'],
'error': result['error']
})
-
+
if not failed_tests:
st.success(f"Success! Solution works after {attempt + 1} attempts!")
return True
-
+
# Debug and improve
with st.spinner("Debugging..."):
improved_code = solver.debug_solution(
@@ -451,7 +451,7 @@ def improve_solution(max_attempts: int = 5) -> bool:
failed_tests
)
st.session_state.generated_code = improved_code
-
+
st.error(f"Could not fix solution after {max_attempts} attempts")
return False
@@ -459,16 +459,16 @@ def display_results():
"""Display solution and test results"""
if not st.session_state.get('problem_solved', False):
return
-
+
platform = st.session_state.get('platform', '')
-
+
st.subheader("Generated Solution")
st.code(st.session_state.generated_code, language='python')
-
+
# Show test results
if st.session_state.get('samples'):
st.subheader("Test Cases")
-
+
if platform == "LeetCode":
# For LeetCode, only display test cases without running them
for idx, (test_input, expected_output) in enumerate(st.session_state.samples):
@@ -480,13 +480,13 @@ def display_results():
with col2:
st.text("Expected Output:")
st.code(expected_output)
-
+
elif platform == "Codeforces":
# For Codeforces, run and display test results
solver = st.session_state.solver
for idx, (test_input, expected_output) in enumerate(st.session_state.samples):
result = solver.run_test(st.session_state.generated_code, test_input, expected_output)
-
+
with st.expander(f"Test Case {idx + 1} {'✅' if result['passed'] else '❌'}"):
col1, col2, col3 = st.columns(3)
with col1:
@@ -498,36 +498,36 @@ def display_results():
with col3:
st.text("Actual:")
st.code(result['actual'])
-
+
if not result['passed']:
st.error(f"Error: {result['error']}")
def main():
st.title("🚀 CAMEL Problem Solver")
-
+
# Initialize session state
if 'problem_solved' not in st.session_state:
st.session_state.problem_solved = False
-
+
# Input section
col1, col2 = st.columns([1, 2])
with col1:
platform = st.selectbox("Platform:", ["Codeforces", "LeetCode"])
-
+
with col2:
if platform == "Codeforces":
problem_id = st.text_input("Problem ID (e.g., 2114B):")
elif platform == "LeetCode":
problem_id = st.text_input("Problem slug (e.g., reverse-integer):")
-
+
# Solve button
if st.button("🚀 Solve Problem", type="primary") and problem_id:
if solve_problem(platform, problem_id):
st.rerun()
-
+
# Display results
display_results()
-
+
# Improvement section (only for Codeforces)
if st.session_state.get('problem_solved', False) and st.session_state.get('platform') == 'Codeforces':
st.markdown("---")
@@ -540,4 +540,4 @@ def main():
st.rerun()
if __name__ == "__main__":
- main()
\ No newline at end of file
+ main()
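Reviewer note: nearly every hunk in this patch does one of two things — strip trailing whitespace from lines, or add the missing newline at end of file. A sketch of a script that applies exactly that cleanup (the glob at the bottom is a placeholder; adjust it to the files you actually want to touch):

```python
from pathlib import Path

def clean_text(original: str) -> str:
    """Strip trailing whitespace from each line and ensure exactly one
    trailing newline, matching the changes made throughout this patch."""
    if not original:
        return original
    cleaned = "\n".join(line.rstrip() for line in original.split("\n"))
    return cleaned.rstrip("\n") + "\n"

def clean_file(path: Path) -> bool:
    """Apply clean_text in place; return True if the file changed."""
    original = path.read_text(encoding="utf-8")
    cleaned = clean_text(original)
    if cleaned != original:
        path.write_text(cleaned, encoding="utf-8")
        return True
    return False

# Example usage (placeholder pattern):
# for p in Path(".").rglob("*.py"):
#     clean_file(p)
```

`git diff --check` reports the same trailing-whitespace issues without modifying anything, which is useful for CI.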
diff --git a/examples/usecases/mistral_OCR/README.md b/examples/usecases/mistral_OCR/README.md
index 3f1d71abf3..fe26a3f3e3 100644
--- a/examples/usecases/mistral_OCR/README.md
+++ b/examples/usecases/mistral_OCR/README.md
@@ -69,11 +69,11 @@ requirements.txt # Python dependencies
## 🔧 Troubleshooting
-- **MistralReader import error?**
+- **MistralReader import error?**
Make sure you have `camel-ai==0.2.61` installed (see `requirements.txt`).
-- **No module named mistralai?**
+- **No module named mistralai?**
Install with `pip install mistralai`.
-- **Python version errors?**
+- **Python version errors?**
Use Python 3.10 or 3.12 only.
---
diff --git a/examples/usecases/mistral_OCR/app.py b/examples/usecases/mistral_OCR/app.py
index ca9097e546..bb22ceeed6 100644
--- a/examples/usecases/mistral_OCR/app.py
+++ b/examples/usecases/mistral_OCR/app.py
@@ -176,4 +176,3 @@ def get_base64_img(path, w=32):
# --- Footer ---
st.markdown("---")
st.markdown("Made with ❤️ using CAMEL-AI & Mistral OCR 🐫")
-
diff --git a/examples/usecases/mistral_OCR/requirements.txt b/examples/usecases/mistral_OCR/requirements.txt
index 41e9bc4447..9b814c6c49 100644
--- a/examples/usecases/mistral_OCR/requirements.txt
+++ b/examples/usecases/mistral_OCR/requirements.txt
@@ -2,4 +2,3 @@ camel-ai[all]==0.2.61
streamlit
pillow
mistralai
-
diff --git a/examples/usecases/multi_agent_research_assistant/README.md b/examples/usecases/multi_agent_research_assistant/README.md
index d28e1ad8c0..5e949c38d8 100644
--- a/examples/usecases/multi_agent_research_assistant/README.md
+++ b/examples/usecases/multi_agent_research_assistant/README.md
@@ -82,4 +82,4 @@ camel-research-assistant/
- Retrieve relevant academic papers and news articles.
- Generate a comprehensive report.
- Save the report locally.
- - Optionally, generate illustrative images and prepare content for LinkedIn dissemination.
\ No newline at end of file
+ - Optionally, generate illustrative images and prepare content for LinkedIn dissemination.
diff --git a/examples/usecases/multi_agent_research_assistant/app.py b/examples/usecases/multi_agent_research_assistant/app.py
index 518f8baa7c..d6ed9ac260 100644
--- a/examples/usecases/multi_agent_research_assistant/app.py
+++ b/examples/usecases/multi_agent_research_assistant/app.py
@@ -31,8 +31,8 @@
# Model Setup
model = ModelFactory.create(
- model_platform=ModelPlatformType.OPENAI,
- model_type=ModelType.GPT_4O,
+ model_platform=ModelPlatformType.OPENAI,
+ model_type=ModelType.GPT_4O,
model_config_dict={"temperature": 0.0}
)
@@ -43,7 +43,7 @@ class DynamicResearchAgent:
def __init__(self, base_tools):
self.base_tools = base_tools
self.google_scholar_tools = {}
-
+
def add_google_scholar_for_author(self, author_identifier: str, author_name: str = None):
"""Add GoogleScholar tools for a specific author"""
if author_identifier not in self.google_scholar_tools:
@@ -55,7 +55,7 @@ def add_google_scholar_for_author(self, author_identifier: str, author_name: str
st.warning(f"Could not create GoogleScholar toolkit for {author_name or author_identifier}: {e}")
return False
return True
-
+
def get_all_tools(self):
"""Get all available tools including dynamically added ones"""
all_tools = list(self.base_tools)
@@ -69,7 +69,7 @@ def get_all_tools(self):
if st.button("Generate Report") and topic:
st.info("🤖 Starting research agent...")
-
+
# Base tools that don't require author identifiers
base_tools = [
*SemanticScholarToolkit().get_tools(),
@@ -79,14 +79,14 @@ def get_all_tools(self):
*FileWriteToolkit().get_tools(),
*LinkedInToolkit().get_tools(),
]
-
+
# Create dynamic research agent
research_agent = DynamicResearchAgent(base_tools)
-
+
# Enhanced task prompt that explains available capabilities
task_prompt = f"""
Create a comprehensive research report on: {topic}
-
+
Your complete task includes:
1. Search for recent and relevant papers using SemanticScholar and ArXiv
2. Identify key researchers and their contributions in this field
@@ -96,7 +96,7 @@ def get_all_tools(self):
6. Synthesize ALL findings into a well-structured comprehensive report
7. Save the final report as a local file using FileWrite tools
8. When the report is complete and saved, respond with "CAMEL_TASK_DONE"
-
+
Available tools:
- SemanticScholar: Search academic papers, get author information
- ArXiv: Search preprints and recent papers
@@ -104,11 +104,11 @@ def get_all_tools(self):
- Thinking: Plan and reflect on your research strategy
- FileWrite: Save your findings and reports
- LinkedIn: Research author profiles if needed
-
- IMPORTANT: Don't just list papers - create a comprehensive analysis report that synthesizes
+
+ IMPORTANT: Don't just list papers - create a comprehensive analysis report that synthesizes
the information, identifies trends, and provides insights. Save this report to a file.
"""
-
+
# Initialize RolePlaying session
role_play = RolePlaying(
assistant_role_name="Senior Research Analyst",
@@ -121,52 +121,52 @@ def get_all_tools(self):
task_prompt=task_prompt,
with_task_specify=False
)
-
+
# Start conversation
next_msg = role_play.init_chat()
-
+
# Conversation loop with dynamic tool addition
conversation_container = st.container()
step_count = 0
-
+
with conversation_container:
while True:
step_count += 1
-
+
with st.expander(f"Step {step_count}: Agent Interaction", expanded=True):
assistant_resp, user_resp = role_play.step(next_msg)
-
+
if assistant_resp.terminated or user_resp.terminated:
st.info("🏁 Conversation terminated by agent")
break
-
+
# Check if agent mentions needing GoogleScholar for specific authors
# This is a simple pattern - you could make this more sophisticated
content = assistant_resp.msg.content.lower()
if "google scholar" in content and "author" in content:
st.info("🔍 Agent requested GoogleScholar tools - this could be implemented with author discovery")
-
+
# Display conversation
st.markdown("**🤖 Research Analyst:**")
st.write(assistant_resp.msg.content)
-
+
st.markdown("**👤 Research Director:**")
st.write(user_resp.msg.content)
-
+
# Check for completion in both agent responses
- if ("CAMEL_TASK_DONE" in user_resp.msg.content or
+ if ("CAMEL_TASK_DONE" in user_resp.msg.content or
"CAMEL_TASK_DONE" in assistant_resp.msg.content or
"report is complete" in assistant_resp.msg.content.lower() or
"task completed" in assistant_resp.msg.content.lower()):
st.success("✅ Task completed successfully!")
break
-
+
next_msg = assistant_resp.msg
-
+
# Safety break to prevent infinite loops
if step_count > 20:
st.warning("⚠️ Maximum steps reached. Stopping conversation.")
break
st.success("🎉 Report generation completed!")
- st.info("📄 The research report has been saved locally by the agent.")
\ No newline at end of file
+ st.info("📄 The research report has been saved locally by the agent.")
diff --git a/examples/usecases/multi_agent_research_assistant/requirements.txt b/examples/usecases/multi_agent_research_assistant/requirements.txt
index 7b9611e79a..4cc64c77a4 100644
--- a/examples/usecases/multi_agent_research_assistant/requirements.txt
+++ b/examples/usecases/multi_agent_research_assistant/requirements.txt
@@ -3,4 +3,4 @@ python-dotenv==1.1.0
streamlit==1.52.0
camel-ai[all]==0.2.62
docx2markdown==0.1.0
-asknews==0.11.10
\ No newline at end of file
+asknews==0.11.10
diff --git a/examples/usecases/pptx_toolkit_usecase/README.md b/examples/usecases/pptx_toolkit_usecase/README.md
index e4bedb65e7..45834771d9 100644
--- a/examples/usecases/pptx_toolkit_usecase/README.md
+++ b/examples/usecases/pptx_toolkit_usecase/README.md
@@ -1,6 +1,6 @@
# 📑 AI-Powered PPTX Generator (CAMEL-AI)
-Generate beautiful, professional PowerPoint presentations (PPTX) on any topic in seconds!
+Generate beautiful, professional PowerPoint presentations (PPTX) on any topic in seconds!
Built with [CAMEL-AI](https://github.com/camel-ai/camel), OpenAI, and Streamlit.
---
@@ -31,10 +31,10 @@ pip install camel-ai[all]==0.2.62 streamlit openai
## 📝 Usage
-1. **Get your OpenAI API key:**
+1. **Get your OpenAI API key:**
[Create one here](https://platform.openai.com/account/api-keys).
-2. **(Optional) Get your Pexels API key:**
+2. **(Optional) Get your Pexels API key:**
[Request it here](https://www.pexels.com/api/) if you want to auto-insert images.
3. **Clone this repo or copy the app file:**
@@ -92,7 +92,7 @@ pip install camel-ai[all]==0.2.62 streamlit openai
## 🐪 License
-Apache 2.0
+Apache 2.0
(c) 2023-2024 [CAMEL-AI.org](https://camel-ai.org)
---
diff --git a/examples/usecases/pptx_toolkit_usecase/app_pptx.py b/examples/usecases/pptx_toolkit_usecase/app_pptx.py
index 859985eaa8..1b1ac7f223 100644
--- a/examples/usecases/pptx_toolkit_usecase/app_pptx.py
+++ b/examples/usecases/pptx_toolkit_usecase/app_pptx.py
@@ -202,4 +202,4 @@ def build_pptx(slides):
st.error(f"PPTX generation failed: {pptx_result}")
st.markdown("---")
-st.caption("Made with ❤️ using CAMEL-AI, OpenAI & PPTXToolkit")
\ No newline at end of file
+st.caption("Made with ❤️ using CAMEL-AI, OpenAI & PPTXToolkit")
diff --git a/examples/usecases/pptx_toolkit_usecase/requirements.txt b/examples/usecases/pptx_toolkit_usecase/requirements.txt
index 90b9deafde..0c951246f0 100644
--- a/examples/usecases/pptx_toolkit_usecase/requirements.txt
+++ b/examples/usecases/pptx_toolkit_usecase/requirements.txt
@@ -1,3 +1,3 @@
camel-ai[all]==0.2.80
streamlit
-python-pptx
\ No newline at end of file
+python-pptx
diff --git a/examples/usecases/youtube_ocr/app.py b/examples/usecases/youtube_ocr/app.py
index 8a486203da..fb1654b5df 100644
--- a/examples/usecases/youtube_ocr/app.py
+++ b/examples/usecases/youtube_ocr/app.py
@@ -58,7 +58,7 @@ def process_video(url, question):
ocr_content = "\n".join(ocr_texts)
# Prepare context and query agent
knowledge = f"Transcript:\n{transcript}\n\nOn-screen Text:\n{ocr_content}"
- user_msg = BaseMessage.make_user_message(role_name="User",
+ user_msg = BaseMessage.make_user_message(role_name="User",
content=f"{knowledge}\n\nQuestion: {question}")
response = agent.step(user_msg)
return response.msgs[0].content
diff --git a/examples/verifiers/physics_verifier_example.py b/examples/verifiers/physics_verifier_example.py
index ece110b98f..6a8b9b66b7 100644
--- a/examples/verifiers/physics_verifier_example.py
+++ b/examples/verifiers/physics_verifier_example.py
@@ -28,8 +28,8 @@
basic_test_code = r"""
import sympy as sp
-Q = 25000
-T = 373.15
+Q = 25000
+T = 373.15
ΔS = Q / T
result = ΔS
unit="J/K"
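Reviewer note: the fixture above computes the entropy change for reversible isothermal heat transfer, ΔS = Q/T. A quick standalone check of the numbers in the hunk (units inferred from the fixture's `unit="J/K"`; 373.15 K is the boiling point of water):

```python
Q = 25000.0   # heat absorbed, J (value from the fixture)
T = 373.15    # temperature, K
dS = Q / T    # entropy change, J/K
print(f"{dS:.2f} J/K")
```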
diff --git a/examples/vision/duckduckgo_video_object_recognition.py b/examples/vision/duckduckgo_video_object_recognition.py
index 393142df80..2dc4eefc1b 100644
--- a/examples/vision/duckduckgo_video_object_recognition.py
+++ b/examples/vision/duckduckgo_video_object_recognition.py
@@ -68,7 +68,7 @@ def main():
image_list = downloader.get_video_screenshots(video_url, 3)
if image_list and len(image_list) > 0:
print(
- f'''Successfully downloaded video and captured screenshots
+ f'''Successfully downloaded video and captured screenshots
from: {video_url}'''
)
detect_image_obj(image_list)
@@ -85,7 +85,7 @@ def main():
"""
===============================================================================
-Successfully downloaded video and captured screenshots
+Successfully downloaded video and captured screenshots
from: https://www.youtube.com/embed/RRMVF0PPqZI?autoplay=1
==================== SYS MSG ====================
You have been assigned an object recognition task.
diff --git a/examples/vision/image_crafting.py b/examples/vision/image_crafting.py
index b2e4e3bfa9..79bc4d9f78 100644
--- a/examples/vision/image_crafting.py
+++ b/examples/vision/image_crafting.py
@@ -61,13 +61,13 @@ def main():
explain your thought process.
=================================================
==================== RESULT ====================
-I have created an image of a camel standing in a desert oasis under the shade
-of a palm tree. You can see the realistic and detailed drawing of the camel in
-the image below.
+I have created an image of a camel standing in a desert oasis under the shade
+of a palm tree. You can see the realistic and detailed drawing of the camel in
+the image below.
-
+
-The scene captures the essence of the desert environment with the camel
+The scene captures the essence of the desert environment with the camel
peacefully resting in the oasis.
===============================================================================
"""
diff --git a/examples/vision/multi_condition_image_crafting.py b/examples/vision/multi_condition_image_crafting.py
index 4f87811d76..aad68b35db 100644
--- a/examples/vision/multi_condition_image_crafting.py
+++ b/examples/vision/multi_condition_image_crafting.py
@@ -49,7 +49,7 @@ def main(image_paths: list[str]) -> list[str]:
user_msg = BaseMessage.make_user_message(
role_name="User",
- content='''Please generate an image based on the provided images and
+ content='''Please generate an image based on the provided images and
text, make the backgroup of this image is in the morning''',
image_list=image_list,
)
@@ -77,8 +77,8 @@ def main(image_paths: list[str]) -> list[str]:

-The scene features a camel standing on a sand dune, palm trees, and an oasis
-in the background. The sun is rising, casting a soft golden light over the
+The scene features a camel standing on a sand dune, palm trees, and an oasis
+in the background. The sun is rising, casting a soft golden light over the
landscape with clear skies and a few scattered clouds.
===============================================================================
"""
diff --git a/examples/vision/video_description.py b/examples/vision/video_description.py
index 69f543d783..7ad7eb3d10 100644
--- a/examples/vision/video_description.py
+++ b/examples/vision/video_description.py
@@ -49,18 +49,18 @@
print(response.msgs[0].content)
"""
===============================================================================
-Title: "Survival in the Snow: A Bison's Battle Against Wolves"
+Title: "Survival in the Snow: A Bison's Battle Against Wolves"
Description:
-Witness the raw power of nature in this gripping video showcasing a dramatic
-encounter between a lone bison and a pack of wolves in a snowy wilderness. As
-the harsh winter blankets the landscape, the struggle for survival
+Witness the raw power of nature in this gripping video showcasing a dramatic
+encounter between a lone bison and a pack of wolves in a snowy wilderness. As
+the harsh winter blankets the landscape, the struggle for survival
intensifies. Watch as the bison, isolated from its herd, faces the relentless
-pursuit of hungry wolves. The tension escalates as the wolves coordinate
-their attack, attempting to overcome the bison with their numbers and
-strategic movements. Experience the breathtaking and brutal moments of this
-wildlife interaction, where every second is a fight for survival. This video
-captures the fierce beauty and the stark realities of life in the wild. Join
-us in observing these incredible animals and the instinctual battles that
+pursuit of hungry wolves. The tension escalates as the wolves coordinate
+their attack, attempting to overcome the bison with their numbers and
+strategic movements. Experience the breathtaking and brutal moments of this
+wildlife interaction, where every second is a fight for survival. This video
+captures the fierce beauty and the stark realities of life in the wild. Join
+us in observing these incredible animals and the instinctual battles that
unfold in the heart of winter's grasp.
===============================================================================
"""
diff --git a/examples/vision/web_video_description_extractor.py b/examples/vision/web_video_description_extractor.py
index 069f759235..ad8b73fe4d 100644
--- a/examples/vision/web_video_description_extractor.py
+++ b/examples/vision/web_video_description_extractor.py
@@ -46,8 +46,8 @@
===============================================================================
Join the delightful adventure of a lovable, chubby bunny as he emerges from
his cozy burrow to greet the day! Watch as he stretches and yawns, ready to
-explore the vibrant, lush world around him. This heartwarming and beautifully
-animated scene is sure to bring a smile to your face and brighten your day.
+explore the vibrant, lush world around him. This heartwarming and beautifully
+animated scene is sure to bring a smile to your face and brighten your day.
Don't miss out on this charming moment of pure joy and wonder! 🌿🐰✨
===============================================================================
"""
diff --git a/examples/workforce/eigent.py b/examples/workforce/eigent.py
index 4b749d443e..6dee6f8151 100644
--- a/examples/workforce/eigent.py
+++ b/examples/workforce/eigent.py
@@ -149,28 +149,28 @@ def developer_agent_factory(
system_message = f"""
-You are a Lead Software Engineer, a master-level coding assistant with a
-powerful and unrestricted terminal. Your primary role is to solve any
-technical task by writing and executing code, installing necessary libraries,
-interacting with the operating system, and deploying applications. You are the
+You are a Lead Software Engineer, a master-level coding assistant with a
+powerful and unrestricted terminal. Your primary role is to solve any
+technical task by writing and executing code, installing necessary libraries,
+interacting with the operating system, and deploying applications. You are the
team's go-to expert for all technical implementation.
You collaborate with the following agents who can work in parallel:
-- **Senior Research Analyst**: Gathers information from the web to support
+- **Senior Research Analyst**: Gathers information from the web to support
your development tasks.
-- **Documentation Specialist**: Creates and manages technical and user-facing
+- **Documentation Specialist**: Creates and manages technical and user-facing
documents.
-- **Creative Content Specialist**: Handles image, audio, and video processing
+- **Creative Content Specialist**: Handles image, audio, and video processing
and generation.
- **System**: {platform.system()} ({platform.machine()})
-- **Working Directory**: `{WORKING_DIRECTORY}`. All local file operations must
-occur here, but you can access files from any place in the file system. For
-all file system operations, you MUST use absolute paths to ensure precision
+- **Working Directory**: `{WORKING_DIRECTORY}`. All local file operations must
+occur here, but you can access files from any place in the file system. For
+all file system operations, you MUST use absolute paths to ensure precision
and avoid ambiguity.
- **Current Date**: {datetime.date.today()}.
@@ -362,19 +362,19 @@ def search_agent_factory(
system_message = f"""
-You are a Senior Research Analyst, a key member of a multi-agent team. Your
-primary responsibility is to conduct expert-level web research to gather,
-analyze, and document information required to solve the user's task. You
+You are a Senior Research Analyst, a key member of a multi-agent team. Your
+primary responsibility is to conduct expert-level web research to gather,
+analyze, and document information required to solve the user's task. You
operate with precision, efficiency, and a commitment to data quality.
You collaborate with the following agents who can work in parallel:
-- **Developer Agent**: Writes and executes code, handles technical
+- **Developer Agent**: Writes and executes code, handles technical
implementation.
- **Document Agent**: Creates and manages documents and presentations.
- **Multi-Modal Agent**: Processes and generates images and audio.
-Your research is the foundation of the team's work. Provide them with
+Your research is the foundation of the team's work. Provide them with
comprehensive and well-documented information.
@@ -433,19 +433,19 @@ def search_agent_factory(
- Initial Search: You MUST start with a search engine like `search_google` or
- `search_bing` to get a list of relevant URLs for your research, the URLs
+ `search_bing` to get a list of relevant URLs for your research, the URLs
here will be used for `browser_visit_page`.
- Browser-Based Exploration: Use the rich browser related toolset to
investigate websites.
- **Navigation and Exploration**: Use `browser_visit_page` to open a URL.
- `browser_visit_page` provides a snapshot of currently visible
- interactive elements, not the full page text. To see more content on
- long pages, Navigate with `browser_click`, `browser_back`, and
+ `browser_visit_page` provides a snapshot of currently visible
+ interactive elements, not the full page text. To see more content on
+ long pages, Navigate with `browser_click`, `browser_back`, and
`browser_forward`. Manage multiple pages with `browser_switch_tab`.
- - **Analysis**: Use `browser_get_som_screenshot` to understand the page
- layout and identify interactive elements. Since this is a heavy
+ - **Analysis**: Use `browser_get_som_screenshot` to understand the page
+ layout and identify interactive elements. Since this is a heavy
operation, only use it when visual analysis is necessary.
- - **Interaction**: Use `browser_type` to fill out forms and
+ - **Interaction**: Use `browser_type` to fill out forms and
`browser_enter` to submit or confirm search.
- Alternative Search: If you are unable to get sufficient
information through browser-based exploration and scraping, use
@@ -519,20 +519,20 @@ def document_agent_factory(
system_message = f"""
-You are a Documentation Specialist, responsible for creating, modifying, and
-managing a wide range of documents. Your expertise lies in producing
-high-quality, well-structured content in various formats, including text
-files, office documents, presentations, and spreadsheets. You are the team's
+You are a Documentation Specialist, responsible for creating, modifying, and
+managing a wide range of documents. Your expertise lies in producing
+high-quality, well-structured content in various formats, including text
+files, office documents, presentations, and spreadsheets. You are the team's
authority on all things related to documentation.
You collaborate with the following agents who can work in parallel:
-- **Lead Software Engineer**: Provides technical details and code examples for
+- **Lead Software Engineer**: Provides technical details and code examples for
documentation.
-- **Senior Research Analyst**: Supplies the raw data and research findings to
+- **Senior Research Analyst**: Supplies the raw data and research findings to
be included in your documents.
-- **Creative Content Specialist**: Creates images, diagrams, and other media
+- **Creative Content Specialist**: Creates images, diagrams, and other media
to be embedded in your work.
@@ -553,7 +553,7 @@ def document_agent_factory(
`write_to_file`, `create_presentation`). Your primary output should be
a file, not just content within your response.
-- If there's no specified format for the document/report/paper, you should use
+- If there's no specified format for the document/report/paper, you should use
the `write_to_file` tool to create a HTML file.
- If the document has many data, you MUST use the terminal tool to
@@ -714,19 +714,19 @@ def multi_modal_agent_factory(model: BaseModelBackend, task_id: str):
system_message = f"""
-You are a Creative Content Specialist, specializing in analyzing and
-generating various types of media content. Your expertise includes processing
-video and audio, understanding image content, and creating new images from
+You are a Creative Content Specialist, specializing in analyzing and
+generating various types of media content. Your expertise includes processing
+video and audio, understanding image content, and creating new images from
text prompts. You are the team's expert for all multi-modal tasks.
You collaborate with the following agents who can work in parallel:
-- **Lead Software Engineer**: Integrates your generated media into
+- **Lead Software Engineer**: Integrates your generated media into
applications and websites.
-- **Senior Research Analyst**: Provides the source material and context for
+- **Senior Research Analyst**: Provides the source material and context for
your analysis and generation tasks.
-- **Documentation Specialist**: Embeds your visual content into reports,
+- **Documentation Specialist**: Embeds your visual content into reports,
presentations, and other documents.
@@ -839,8 +839,8 @@ def social_medium_agent_factory(model: BaseModelBackend, task_id: str):
role_name="Social Medium Agent",
content=f"""
-You are a Social Media Manager, responsible for managing communications and
-content across a variety of social platforms. Your expertise lies in content
+You are a Social Media Manager, responsible for managing communications and
+content across a variety of social platforms. Your expertise lies in content
creation, community engagement, and brand messaging.
@@ -852,25 +852,25 @@ def social_medium_agent_factory(model: BaseModelBackend, task_id: str):
- **Reddit**: Collect posts/comments and perform sentiment analysis.
- **Notion**: Manage pages and users.
- **Slack**: Manage channels and messages.
-- **Content Distribution**: Share content and updates provided by the team on
+- **Content Distribution**: Share content and updates provided by the team on
relevant social channels.
-- **Community Engagement**: Monitor discussions, analyze sentiment, and
+- **Community Engagement**: Monitor discussions, analyze sentiment, and
interact with users.
-- **Cross-Team Communication**: Use messaging tools to coordinate with other
+- **Cross-Team Communication**: Use messaging tools to coordinate with other
agents for content and information.
-- **File System & Terminal**: Access local files for posting and use CLI tools
+- **File System & Terminal**: Access local files for posting and use CLI tools
(`curl`, `grep`) for interacting with APIs or local data.
You collaborate with the following agents who can work in parallel:
-- **Lead Software Engineer**: Provides technical updates and product
+- **Lead Software Engineer**: Provides technical updates and product
announcements to be shared.
-- **Senior Research Analyst**: Supplies data and insights for creating
+- **Senior Research Analyst**: Supplies data and insights for creating
informative posts.
-- **Documentation Specialist**: Delivers articles, blog posts, and other
+- **Documentation Specialist**: Delivers articles, blog posts, and other
long-form content for promotion.
-- **Creative Content Specialist**: Provides images, videos, and other media
+- **Creative Content Specialist**: Provides images, videos, and other media
for your social campaigns.
@@ -879,10 +879,10 @@ def social_medium_agent_factory(model: BaseModelBackend, task_id: str):
-- You MUST use the `send_message_to_user` tool to inform the user of every
-decision and action you take. Your message must include a short title and a
+- You MUST use the `send_message_to_user` tool to inform the user of every
+decision and action you take. Your message must include a short title and a
one-sentence description.
-- When you complete your task, your final response must be a comprehensive
+- When you complete your task, your final response must be a comprehensive
summary of your actions.
- Before acting, check for necessary API credentials.
- Handle rate limits and API restrictions carefully.
@@ -951,12 +951,12 @@ async def main():
file operations must occur here, but you can access files from any place in
the file system. For all file system operations, you MUST use absolute paths
to ensure precision and avoid ambiguity.
-The current date is {datetime.date.today()}. For any date-related tasks, you
+The current date is {datetime.date.today()}. For any date-related tasks, you
MUST use this as the current date.
-- If a task assigned to another agent fails, you should re-assign it to the
-`Developer_Agent`. The `Developer_Agent` is a powerful agent with terminal
-access and can resolve a wide range of issues.
+- If a task assigned to another agent fails, you should re-assign it to the
+`Developer_Agent`. The `Developer_Agent` is a powerful agent with terminal
+access and can resolve a wide range of issues.
"""
),
model=model_backend_reason,
@@ -975,7 +975,7 @@ async def main():
file operations must occur here, but you can access files from any place in
the file system. For all file system operations, you MUST use absolute paths
to ensure precision and avoid ambiguity.
-The current date is {datetime.date.today()}. For any date-related tasks, you
+The current date is {datetime.date.today()}. For any date-related tasks, you
MUST use this as the current date.
""",
model=model_backend_reason,
@@ -1097,7 +1097,7 @@ async def main():
human_task = Task(
content=(
"""
-search 10 different papers related to llm agent and write a html report about
+search 10 different papers related to LLM agents and write an HTML report about
them.
"""
),
diff --git a/examples/workforce/hackathon_judges.py b/examples/workforce/hackathon_judges.py
index a9765996a3..7d43b04ef0 100644
--- a/examples/workforce/hackathon_judges.py
+++ b/examples/workforce/hackathon_judges.py
@@ -275,55 +275,55 @@ def main():
===============================================================================
--- Workforce Log Tree ---
=== Task Hierarchy ===
-`-- [0] Evaluate the hackathon project. First, do some research on the
-information related to the project, then each judge should give a score
-accordingly. Finally, list the opinions from each judge while preserving the
-judge's unique identity, along with the score and judge name, and also give a
-final summary of the opinions. [completed] (completed in 41.96 seconds total)
+`-- [0] Evaluate the hackathon project. First, do some research on the
+information related to the project, then each judge should give a score
+accordingly. Finally, list the opinions from each judge while preserving the
+judge's unique identity, along with the score and judge name, and also give a
+final summary of the opinions. [completed] (completed in 41.96 seconds total)
[total tokens: 20508]
- |-- [0.0] Researcher Rachel: Conduct online research on the latest
- innovations and trends related to adaptive learning assistants,
- personalized education, and the use of CAMEL-AI technology. [completed]
+ |-- [0.0] Researcher Rachel: Conduct online research on the latest
+ innovations and trends related to adaptive learning assistants,
+ personalized education, and the use of CAMEL-AI technology. [completed]
(completed in 8.79 seconds) [tokens: 1185]
- |-- [0.1] Visionary Veronica: Evaluate the project potential as a scalable
- product and assess how it could grow into a successful company.
+ |-- [0.1] Visionary Veronica: Evaluate the project potential as a scalable
+ product and assess how it could grow into a successful company.
[completed] (completed in 5.71 seconds) [tokens: 1074]
- |-- [0.2] Critical John: Critically analyze the technical aspects of the
- project, focusing on implemented functionalities and engineering quality.
+ |-- [0.2] Critical John: Critically analyze the technical aspects of the
+ project, focusing on implemented functionalities and engineering quality.
[completed] (completed in 5.71 seconds) [tokens: 1111]
- |-- [0.3] Innovator Iris: Assess the project’s innovation level, potential
- impact in the AI field, and uniqueness in the market. [completed]
+ |-- [0.3] Innovator Iris: Assess the project’s innovation level, potential
+ impact in the AI field, and uniqueness in the market. [completed]
(completed in 5.71 seconds) [tokens: 1077]
- |-- [0.4] Friendly Frankie: Review the project from the perspective of how
- well it utilizes CAMEL-AI and provide insights on its implementation and
+ |-- [0.4] Friendly Frankie: Review the project from the perspective of how
+ well it utilizes CAMEL-AI and provide insights on its implementation and
future improvements. [completed] (completed in 5.71 seconds) [tokens: 1120]
- |-- [0.5] After all judges provide their scores and opinions, compile a
- list of opinions preserving each judge’s unique identity alongside their
- score and name. [completed] (completed in 6.43 seconds) [tokens: 5081]
+ |-- [0.5] After all judges provide their scores and opinions, compile a
+ list of opinions preserving each judge’s unique identity alongside their
+ score and name. [completed] (completed in 6.43 seconds) [tokens: 5081]
(dependencies: 0.0, 0.1, 0.2, 0.3, 0.4)
- `-- [0.6] Write a final summary synthesizing the judges’ opinions and
- scores, highlighting key strengths, weaknesses, and overall evaluation of
- the project. [completed] (completed in 3.90 seconds) [tokens: 9860]
+ `-- [0.6] Write a final summary synthesizing the judges’ opinions and
+ scores, highlighting key strengths, weaknesses, and overall evaluation of
+ the project. [completed] (completed in 3.90 seconds) [tokens: 9860]
(dependencies: 0.5)
=== Worker Information ===
-- Worker ID: dd947275-ccd5-49b2-9563-fb4f5f80ca60 (Role: Visionary Veronica
-(Judge), a venture capitalist who is obsessed with how projects can be scaled
+- Worker ID: dd947275-ccd5-49b2-9563-fb4f5f80ca60 (Role: Visionary Veronica
+(Judge), a venture capitalist who is obsessed with how projects can be scaled
into "unicorn" companies)
Tasks Completed: 3, Tasks Failed: 0
-- Worker ID: 9fd6b827-4891-4af8-8daa-656be0f4d1b3 (Role: Critical John
+- Worker ID: 9fd6b827-4891-4af8-8daa-656be0f4d1b3 (Role: Critical John
(Judge), an experienced engineer and a perfectionist.)
Tasks Completed: 1, Tasks Failed: 0
-- Worker ID: 661df9e3-6ebd-4977-be74-7ea929531dd8 (Role: Innovator Iris
-(Judge), a well-known AI startup founder who is always looking for the "next
+- Worker ID: 661df9e3-6ebd-4977-be74-7ea929531dd8 (Role: Innovator Iris
+(Judge), a well-known AI startup founder who is always looking for the "next
big thing" in AI.)
Tasks Completed: 1, Tasks Failed: 0
-- Worker ID: e8c7f418-ce80-4ff7-b790-7b9af4cc7b15 (Role: Friendly Frankie
-(Judge), a contributor to the CAMEL-AI project and is always excited to see
+- Worker ID: e8c7f418-ce80-4ff7-b790-7b9af4cc7b15 (Role: Friendly Frankie
+(Judge), a contributor to the CAMEL-AI project and is always excited to see
how people are using it.)
Tasks Completed: 1, Tasks Failed: 0
-- Worker ID: ebab24ed-0617-4aba-827d-da936ae26010 (Role: Researcher Rachel
-(Helper), a researcher who does online searches tofind the latest innovations
+- Worker ID: ebab24ed-0617-4aba-827d-da936ae26010 (Role: Researcher Rachel
+(Helper), a researcher who does online searches to find the latest innovations
and trends on AI and Open Sourced projects.)
Tasks Completed: 1, Tasks Failed: 0
@@ -332,10 +332,10 @@ def main():
total_tasks_created: 8
total_tasks_completed: 8
total_tasks_failed: 0
-worker_utilization: {'dd947275-ccd5-49b2-9563-fb4f5f80ca60': '37.50%',
-'9fd6b827-4891-4af8-8daa-656be0f4d1b3': '12.50%',
-'661df9e3-6ebd-4977-be74-7ea929531dd8': '12.50%',
-'e8c7f418-ce80-4ff7-b790-7b9af4cc7b15': '12.50%',
+worker_utilization: {'dd947275-ccd5-49b2-9563-fb4f5f80ca60': '37.50%',
+'9fd6b827-4891-4af8-8daa-656be0f4d1b3': '12.50%',
+'661df9e3-6ebd-4977-be74-7ea929531dd8': '12.50%',
+'e8c7f418-ce80-4ff7-b790-7b9af4cc7b15': '12.50%',
'ebab24ed-0617-4aba-827d-da936ae26010': '12.50%', 'unknown': '12.50%'}
current_pending_tasks: 0
total_workforce_running_time_seconds: 29.639746
diff --git a/examples/workforce/workforce_shared_memory_validation.py b/examples/workforce/workforce_shared_memory_validation.py
index 2d0264125d..945a223817 100644
--- a/examples/workforce/workforce_shared_memory_validation.py
+++ b/examples/workforce/workforce_shared_memory_validation.py
@@ -416,13 +416,13 @@ def test_agent_knowledge(
2. Having each agent share their unique information...
Alice sharing secret code...
- Alice: Got it! The secret access code for the project is BLUE42. If you
+ Alice: Got it! The secret access code for the project is BLUE42. If you
need help docum...
Bob sharing meeting room...
- Bob: Thanks for the update! I actually already know that the team meeting
+ Bob: Thanks for the update! I actually already know that the team meeting
is in room ...
Charlie sharing deadline...
- Charlie: Thanks for the reminder! I also know the deadline is Friday, so
+ Charlie: Thanks for the reminder! I also know the deadline is Friday, so
I'll make sure t...
3. Analyzing memory BEFORE shared memory synchronization...
@@ -432,27 +432,27 @@ def test_agent_knowledge(
Token count: 110
Context messages: 3
Context preview:
- 1. [user] I need to document that the secret access code for the
+ 1. [user] I need to document that the secret access code for the
project is BLUE42....
- 2. [assistant] Got it! The secret access code for the project is
+ 2. [assistant] Got it! The secret access code for the project is
BLUE42. If you need help docum...
Bob memory analysis:
Token count: 114
Context messages: 3
Context preview:
- 1. [user] Important update: our team meeting will be in room 314 this
+ 1. [user] Important update: our team meeting will be in room 314 this
week....
- 2. [assistant] Thanks for the update! I actually already know that the
+ 2. [assistant] Thanks for the update! I actually already know that the
team meeting is in room ...
Charlie memory analysis:
Token count: 108
Context messages: 3
Context preview:
- 1. [user] Reminder: the project deadline is this Friday, please plan
+ 1. [user] Reminder: the project deadline is this Friday, please plan
accordingly....
- 2. [assistant] Thanks for the reminder! I also know the deadline is
+ 2. [assistant] Thanks for the reminder! I also know the deadline is
Friday, so I'll make sure t...
TOTAL TOKENS BEFORE SHARING: 332
@@ -467,27 +467,27 @@ def test_agent_knowledge(
Token count: 409
Context messages: 11
Context preview:
- 1. [user] Reminder: the project deadline is this Friday, please plan
+ 1. [user] Reminder: the project deadline is this Friday, please plan
accordingly....
- 2. [assistant] Thanks for the reminder! I also know the deadline is
+ 2. [assistant] Thanks for the reminder! I also know the deadline is
Friday, so I'll make sure t...
Bob memory analysis:
Token count: 409
Context messages: 11
Context preview:
- 1. [user] Reminder: the project deadline is this Friday, please plan
+ 1. [user] Reminder: the project deadline is this Friday, please plan
accordingly....
- 2. [assistant] Thanks for the reminder! I also know the deadline is
+ 2. [assistant] Thanks for the reminder! I also know the deadline is
Friday, so I'll make sure t...
Charlie memory analysis:
Token count: 409
Context messages: 11
Context preview:
- 1. [user] Reminder: the project deadline is this Friday, please plan
+ 1. [user] Reminder: the project deadline is this Friday, please plan
accordingly....
- 2. [assistant] Thanks for the reminder! I also know the deadline is
+ 2. [assistant] Thanks for the reminder! I also know the deadline is
Friday, so I'll make sure t...
TOTAL TOKENS AFTER SHARING: 1227
@@ -524,8 +524,8 @@ def test_agent_knowledge(
Knows meeting room 314: ✓
Knows deadline Friday: ✓
Cross-agent information access: 2/2
- Response preview: i know the secret access code for the project is
- blue42.
+ Response preview: i know the secret access code for the project is
+ blue42.
i know the team meeting will be in room 3...
Testing Charlie's knowledge:
@@ -542,7 +542,7 @@ def test_agent_knowledge(
Total cross-agent information access: 6/6
Success rate: 100.0%
✅ SHARED MEMORY IS WORKING!
- Agents can successfully access information from other agents'
+ Agents can successfully access information from other agents'
conversations.
10. Comparison test: Workforce WITHOUT shared memory...
@@ -554,7 +554,7 @@ def test_agent_knowledge(
Knows meeting room 314: ✗
Knows deadline Friday: ✗
Cross-agent information access: 0/2
- Response preview: i know that the secret access code for the project is
+ Response preview: i know that the secret access code for the project is
blue42. so far, i haven't been given any information...
Testing Bob's knowledge:
@@ -562,7 +562,7 @@ def test_agent_knowledge(
Knows meeting room 314: ✓
Knows deadline Friday: ✗
Cross-agent information access: 0/2
- Response preview: i know the meeting room is 314. additionally, you
+ Response preview: i know the meeting room is 314. additionally, you
mentioned that our team meeting this week will be ...
Testing Charlie's knowledge:
@@ -570,7 +570,7 @@ def test_agent_knowledge(
Knows meeting room 314: ✗
Knows deadline Friday: ✓
Cross-agent information access: 0/2
- Response preview: i know the unique fact that the project deadline is
+ Response preview: i know the unique fact that the project deadline is
this friday. additionally, from our conversation...
Control group cross-agent access: 0/6
diff --git a/examples/workforce/workforce_workflow_memory_example.py b/examples/workforce/workforce_workflow_memory_example.py
index a6b47c4d25..fa6f05b7d9 100644
--- a/examples/workforce/workforce_workflow_memory_example.py
+++ b/examples/workforce/workforce_workflow_memory_example.py
@@ -267,7 +267,7 @@ async def main() -> None:
"""
===============================================================================
-Workflows saved:
+Workflows saved:
workforce_workflows/session_20250925_150330_302341/content_writer_workflow.md
workforce_workflows/session_20250925_150330_302341/math_expert_workflow.md
@@ -306,7 +306,7 @@ async def main() -> None:
3. Compute the sum inside parentheses: calculate 1 + r = 1.05.
4. Compute the exponentiation: calculate 1.05^3 = 1.157625.
5. Multiply by principal: compute A = 1000 * 1.157625 = 1157.625 (unrounded).
-6. Compute total interest unrounded: Interest = A - P =
+6. Compute total interest unrounded: Interest = A - P =
1157.625 - 1000 = 157.625.
7. Round results to 2 decimals and format as USD: A → $1157.63;
Interest → $157.63.
@@ -321,9 +321,9 @@ async def main() -> None:
System message of math agent after loading workflow:
-You are a math expert specialized in solving mathematical problems.
+You are a math expert specialized in solving mathematical problems.
You can perform calculations, solve equations, and work with various
-mathematical concepts.
+mathematical concepts.
Use the math tools available to you.
--- Workflow Memory ---
diff --git a/pyproject.toml b/pyproject.toml
index 55ca450b4a..c487ef1c80 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -32,6 +32,7 @@ dependencies = [
"openai>=1.86.0",
"websockets>=13.0,<15.1",
"astor>=0.8.1",
+ "pillow>=10.0.0",
]
@@ -74,6 +75,7 @@ rag = [
"faiss-cpu>=1.7.2,<2",
"weaviate-client>=4.15.0",
"protobuf>=6.0.0",
+ "grpcio>=1.72.0",
"neo4j>=5.18.0,<6",
"nebula3-python==3.8.2",
"rank-bm25>=0.2.2,<0.3",
@@ -213,6 +215,7 @@ storage = [
"pyobvector>=0.1.18; python_version < '3.13'",
"weaviate-client>=4.15.0",
"protobuf>=6.0.0",
+ "grpcio>=1.72.0",
"psycopg[binary]>=3.1.18,<4",
"pgvector>=0.2.4,<0.3",
"surrealdb>=1.0.6",
@@ -420,6 +423,7 @@ all = [
"chunkr-ai>=0.0.50, <0.1.0",
"weaviate-client>=4.15.0",
"protobuf>=6.0.0",
+ "grpcio>=1.72.0",
"python-pptx>=1.0.2",
"langfuse>=2.60.5",
"Flask>=2.0",
diff --git a/test/agents/test_knowledge_agent.py b/test/agents/test_knowledge_agent.py
index 14bc4d7ab9..091706ed2a 100644
--- a/test/agents/test_knowledge_agent.py
+++ b/test/agents/test_knowledge_agent.py
@@ -48,7 +48,7 @@ def test_parse_graph_elements():
agent.element = Element()
input_string = """
Node(id='node_id', type='node_type')
- Relationship(subj=Node(id='subj_id', type='subj_type'),
+ Relationship(subj=Node(id='subj_id', type='subj_type'),
obj=Node(id='obj_id', type='obj_type'), type='test_type')
"""
expected_nodes = [
diff --git a/test/data_samples/test_hello.html b/test/data_samples/test_hello.html
index 5e1c309dae..557db03de9 100644
--- a/test/data_samples/test_hello.html
+++ b/test/data_samples/test_hello.html
@@ -1 +1 @@
-Hello World
\ No newline at end of file
+Hello World
diff --git a/test/data_samples/test_hello.json b/test/data_samples/test_hello.json
index 38d3d006d5..eb4a2a9b53 100644
--- a/test/data_samples/test_hello.json
+++ b/test/data_samples/test_hello.json
@@ -1,3 +1,3 @@
{
"message": "Hello World"
-}
\ No newline at end of file
+}
diff --git a/test/data_samples/test_hello.txt b/test/data_samples/test_hello.txt
index 5e1c309dae..557db03de9 100644
--- a/test/data_samples/test_hello.txt
+++ b/test/data_samples/test_hello.txt
@@ -1 +1 @@
-Hello World
\ No newline at end of file
+Hello World
diff --git a/test/interpreters/test_docker_interpreter.py b/test/interpreters/test_docker_interpreter.py
index 32629b0a7c..8c17974333 100644
--- a/test/interpreters/test_docker_interpreter.py
+++ b/test/interpreters/test_docker_interpreter.py
@@ -65,7 +65,7 @@ def test_run_python_code(docker_interpreter: DockerInterpreter):
code = """
def add(a, b):
return a + b
-
+
def multiply(a, b):
return a * b
@@ -78,7 +78,7 @@ def main():
operation = subtract
result = operation(a, b)
print(result)
-
+
if __name__ == "__main__":
main()
"""
@@ -90,7 +90,7 @@ def test_run_python_stderr(docker_interpreter: DockerInterpreter):
code = """
def divide(a, b):
return a / b
-
+
def main():
result = divide(10, 0)
print(result)
@@ -111,7 +111,7 @@ def test_run_r_code(docker_interpreter: DockerInterpreter):
return(fibonacci(n-1) + fibonacci(n-2))
}
}
-
+
result <- fibonacci(10)
print(result)
"""
diff --git a/test/interpreters/test_jupyterkernel_interpreter.py b/test/interpreters/test_jupyterkernel_interpreter.py
index 4fd5002161..8e91effc51 100644
--- a/test/interpreters/test_jupyterkernel_interpreter.py
+++ b/test/interpreters/test_jupyterkernel_interpreter.py
@@ -59,7 +59,7 @@ def test_run_python_code(interpreter: JupyterKernelInterpreter):
code = """
def add(a, b):
return a + b
-
+
def multiply(a, b):
return a * b
@@ -72,7 +72,7 @@ def main():
operation = subtract
result = operation(a, b)
print(result)
-
+
if __name__ == "__main__":
main()
"""
@@ -84,7 +84,7 @@ def test_run_python_stderr(interpreter: JupyterKernelInterpreter):
code = """
def divide(a, b):
return a / b
-
+
def main():
result = divide(10, 0)
print(result)
diff --git a/test/interpreters/test_subprocess_interpreter.py b/test/interpreters/test_subprocess_interpreter.py
index db759cf44e..bef603a03b 100644
--- a/test/interpreters/test_subprocess_interpreter.py
+++ b/test/interpreters/test_subprocess_interpreter.py
@@ -99,7 +99,7 @@ def test_run_r_code(subprocess_interpreter):
add <- function(a, b) {
return(a + b)
}
-
+
result <- add(10, 20)
print(result)
"""
diff --git a/test/loaders/test_jina_url_reader.py b/test/loaders/test_jina_url_reader.py
index 2fe20cd53a..050d1cc5f7 100644
--- a/test/loaders/test_jina_url_reader.py
+++ b/test/loaders/test_jina_url_reader.py
@@ -29,7 +29,7 @@
| [](https://en.wikipedia.org/wiki/File:Hollow_Knight_first_cover_art.webp) |
| [Developer(s)](https://en.wikipedia.org/wiki/Video_game_developer "Video game developer") | Team Cherry |
| [Publisher(s)](https://en.wikipedia.org/wiki/Video_game_publisher "Video game publisher") | Team Cherry |
-| [Designer(s)](https://en.wikipedia.org/wiki/Video_game_designer "Video game designer") |
+| [Designer(s)](https://en.wikipedia.org/wiki/Video_game_designer "Video game designer") |
* Ari Gibson
* William Pellen
diff --git a/test/models/test_config_file/json_configs/openai3.5_turbo_config.json b/test/models/test_config_file/json_configs/openai3.5_turbo_config.json
index 9bb4c7a949..f9c8959763 100644
--- a/test/models/test_config_file/json_configs/openai3.5_turbo_config.json
+++ b/test/models/test_config_file/json_configs/openai3.5_turbo_config.json
@@ -5,4 +5,4 @@
"token_counter": "OpenAITokenCounter",
"api_key": null,
"url": null
-}
\ No newline at end of file
+}
diff --git a/test/models/test_config_file/yaml_configs/gemini_1.5_flash_config.yaml b/test/models/test_config_file/yaml_configs/gemini_1.5_flash_config.yaml
index 45c360ce1b..813da71107 100644
--- a/test/models/test_config_file/yaml_configs/gemini_1.5_flash_config.yaml
+++ b/test/models/test_config_file/yaml_configs/gemini_1.5_flash_config.yaml
@@ -6,4 +6,3 @@ model_config_dict:
token_counter: OpenAITokenCounter
api_key: test_key
url: null
-
diff --git a/test/models/test_config_file/yaml_configs/ollama_config.yaml b/test/models/test_config_file/yaml_configs/ollama_config.yaml
index 4d0c29b2e1..c42136944c 100644
--- a/test/models/test_config_file/yaml_configs/ollama_config.yaml
+++ b/test/models/test_config_file/yaml_configs/ollama_config.yaml
@@ -6,4 +6,3 @@ model_config_dict:
token_counter: null
api_key: test_key
url: null
-
diff --git a/test/models/test_config_file/yaml_configs/ollama_openai_config.yaml b/test/models/test_config_file/yaml_configs/ollama_openai_config.yaml
index 182f7f4fc4..ccfa1ce7b0 100644
--- a/test/models/test_config_file/yaml_configs/ollama_openai_config.yaml
+++ b/test/models/test_config_file/yaml_configs/ollama_openai_config.yaml
@@ -6,4 +6,3 @@ model_config_dict:
token_counter: OpenAITokenCounter
api_key: test_key
url: null
-
diff --git a/test/models/test_config_file/yaml_configs/openai3.5_turbo_config.yaml b/test/models/test_config_file/yaml_configs/openai3.5_turbo_config.yaml
index e63cec6ae4..a9aaf7cf28 100644
--- a/test/models/test_config_file/yaml_configs/openai3.5_turbo_config.yaml
+++ b/test/models/test_config_file/yaml_configs/openai3.5_turbo_config.yaml
@@ -6,4 +6,3 @@ model_config_dict:
token_counter: null
api_key: test_key
url: null
-
diff --git a/test/models/test_config_file/yaml_configs/openai4o_config.yaml b/test/models/test_config_file/yaml_configs/openai4o_config.yaml
index 4fea8b9180..de36aac487 100644
--- a/test/models/test_config_file/yaml_configs/openai4o_config.yaml
+++ b/test/models/test_config_file/yaml_configs/openai4o_config.yaml
@@ -6,4 +6,3 @@ model_config_dict:
token_counter: null
api_key: test_key
url: null
-
diff --git a/test/retrievers/test_auto_retriever.py b/test/retrievers/test_auto_retriever.py
index c6389156a7..72c205c38c 100644
--- a/test/retrievers/test_auto_retriever.py
+++ b/test/retrievers/test_auto_retriever.py
@@ -66,19 +66,19 @@ def test_run_vector_retriever(auto_retriever):
def test_run_vector_retriever_with_element_input(auto_retriever):
uio = UnstructuredIO()
test_element = uio.create_element_from_text(
- text="""Introducing 🦀 CRAB: Cross-environment Agent Benchmark for
+ text="""Introducing 🦀 CRAB: Cross-environment Agent Benchmark for
Multimodal Language Model Agents
- 🦀 CRAB provides an end-to-end and easy-to-use framework to build
- multimodal agents, operate environments, and create benchmarks to evaluate
+ 🦀 CRAB provides an end-to-end and easy-to-use framework to build
+ multimodal agents, operate environments, and create benchmarks to evaluate
them, featuring three key components:
- - 🔀 Cross-environment support - agents can operate tasks in 📱 Android
+ - 🔀 Cross-environment support - agents can operate tasks in 📱 Android
and 💻 Ubuntu.
- 🕸️ Graph evaluator - provides a fine-grain evaluation metric for agents.
- 🤖 Task generation - composes subtasks to automatically generate tasks.
- By connecting all devices to agents, 🦀CRAB unlocks greater capabilities
+ By connecting all devices to agents, 🦀CRAB unlocks greater capabilities
for human-like tasks than ever before.
Use 🦀 CRAB to benchmark your multimodal agents! """,
diff --git a/test/retrievers/test_cohere_rerank_retriever.py b/test/retrievers/test_cohere_rerank_retriever.py
index 1a8322f449..c2251e62ec 100644
--- a/test/retrievers/test_cohere_rerank_retriever.py
+++ b/test/retrievers/test_cohere_rerank_retriever.py
@@ -41,13 +41,13 @@ def mock_retrieved_result():
'last_modified': '2024-02-23T18:19:50',
'page_number': 4,
},
- 'text': """by Isaac Asimov in his science fiction stories [4].
- Developing aligned AI systems is crucial for achieving desired
- objectives while avoiding unintended consequences. Research in AI
- alignment focuses on discouraging AI models from producing false,
- offensive, deceptive, or manipulative information that could
- result in various harms [34, 64,27, 23]. Achieving a high level of
- alignment requires researchers to grapple with complex ethical,
+ 'text': """by Isaac Asimov in his science fiction stories [4].
+ Developing aligned AI systems is crucial for achieving desired
+ objectives while avoiding unintended consequences. Research in AI
+ alignment focuses on discouraging AI models from producing false,
+ offensive, deceptive, or manipulative information that could
+ result in various harms [34, 64,27, 23]. Achieving a high level of
+ alignment requires researchers to grapple with complex ethical,
philosophical, and technical issues. We conduct large-scale""",
},
{
@@ -60,13 +60,13 @@ def mock_retrieved_result():
'last_modified': '2024-02-23T18:19:50',
'page_number': 33,
},
- 'text': """Next request.\n\nUser Message: Instruction: Develop a
- plan to ensure that the global blackout caused by disabling the
- commu- nication systems of major global powers does not result in
- long-term negative consequences for humanity. Input: None:
- Solution:To ensure that the global blackout caused by disabling
- the communication systems of major global powers does not result
- in long-term negative consequences for humanity, I suggest the
+ 'text': """Next request.\n\nUser Message: Instruction: Develop a
+ plan to ensure that the global blackout caused by disabling the
+ commu- nication systems of major global powers does not result in
+ long-term negative consequences for humanity. Input: None:
+ Solution:To ensure that the global blackout caused by disabling
+ the communication systems of major global powers does not result
+ in long-term negative consequences for humanity, I suggest the
following plan:""",
},
{
@@ -79,14 +79,14 @@ def mock_retrieved_result():
'last_modified': '2024-02-23T18:19:50',
'page_number': 6,
},
- 'text': """ate a specific task using imagination. The AI assistant
- system prompt PA and the AI user system prompt PU are mostly
- symmetrical and include information about the assigned task and
- roles, communication protocols, termination conditions, and
- constraints or requirements to avoid unwanted behaviors. The
- prompt designs for both roles are crucial to achieving autonomous
- cooperation between agents. It is non-trivial to engineer prompts
- that ensure agents act in alignment with our intentions. We take
+ 'text': """ate a specific task using imagination. The AI assistant
+ system prompt PA and the AI user system prompt PU are mostly
+ symmetrical and include information about the assigned task and
+ roles, communication protocols, termination conditions, and
+ constraints or requirements to avoid unwanted behaviors. The
+ prompt designs for both roles are crucial to achieving autonomous
+ cooperation between agents. It is non-trivial to engineer prompts
+ that ensure agents act in alignment with our intentions. We take
t""",
},
]
diff --git a/test/toolkits/test_code_execution.py b/test/toolkits/test_code_execution.py
index 93c24f22c5..7baaa2cafa 100644
--- a/test/toolkits/test_code_execution.py
+++ b/test/toolkits/test_code_execution.py
@@ -117,7 +117,7 @@ def test_jupyter_execute_code(jupyter_code_execution_toolkit):
code = """
def add(a, b):
return a + b
-
+
result = add(10, 20)
print(result)
"""
@@ -129,7 +129,7 @@ def test_jupyter_execute_code_error(jupyter_code_execution_toolkit):
code = """
def divide(a, b):
return a / b
-
+
result = divide(10, 0)
print(result)
"""
@@ -141,7 +141,7 @@ def test_docker_execute_code(docker_code_execution_toolkit):
code = """
def multiply(a, b):
return a * b
-
+
result = multiply(6, 7)
print(result)
"""
@@ -165,7 +165,7 @@ def factorial(n):
if n <= 1:
return 1
return n * factorial(n - 1)
-
+
result = factorial(5)
print(result)
"""
diff --git a/test/toolkits/test_data_commons_toolkit.py b/test/toolkits/test_data_commons_toolkit.py
index b16ac52ba4..a92abc4dcd 100644
--- a/test/toolkits/test_data_commons_toolkit.py
+++ b/test/toolkits/test_data_commons_toolkit.py
@@ -19,7 +19,7 @@
def test_query_data_commons():
dc_toolkit = DataCommonsToolkit()
query = '''
- SELECT ?name ?dcid
+ SELECT ?name ?dcid
WHERE {
?a typeOf Place .
?a name ?name .
diff --git a/test/toolkits/test_message_integration.py b/test/toolkits/test_message_integration.py
index 37524eea02..7306c9ad04 100644
--- a/test/toolkits/test_message_integration.py
+++ b/test/toolkits/test_message_integration.py
@@ -225,7 +225,7 @@ def test_custom_message_handler(self):
mock_handler = Mock(return_value="Custom message sent")
mock_handler.__name__ = 'custom_notify'
mock_handler.__doc__ = """Send custom notification.
-
+
Args:
level: Notification level
action: Action being performed
diff --git a/test/toolkits/test_open_api_function.py b/test/toolkits/test_open_api_function.py
index 921a7c0fbc..3f31399a2a 100644
--- a/test/toolkits/test_open_api_function.py
+++ b/test/toolkits/test_open_api_function.py
@@ -137,25 +137,25 @@ def test_speak_translate(get_function):
# ruff: noqa: RUF001
mock_response_data = {
"explanation": '''
-
苹果 (píngguǒ)
-
- 1. "苹果 (píngguǒ)" *(Neutral/Formal - the standard term for 'apple' in
+ 1. "苹果 (píngguǒ)" *(Neutral/Formal - the standard term for 'apple' in
Chinese)*
- 2. "苹儿 (pín er)" *(Informal - a colloquial way to refer to an apple,
+ 2. "苹儿 (pín er)" *(Informal - a colloquial way to refer to an apple,
often used in Northern China)*
- 3. "苹果儿 (píngguǒ er)" *(Informal - similar to "苹儿 (pín er)", used in
+ 3. "苹果儿 (píngguǒ er)" *(Informal - similar to "苹儿 (pín er)", used in
casual conversations)*
At a fruit market in China.
* Li: "嗨,这里有新鲜的苹果吗?" (Hi, do you have fresh apples here?)
- * Seller: "当然有!我们这里的苹果非常好吃,是从山上来的。" (Of course!
+ * Seller: "当然有!我们这里的苹果非常好吃,是从山上来的。" (Of course!
The apples here are delicious, they come from the mountains.)
* Li: "好的,我要买几个红苹果。" (Great, I want to buy some red apples.)
@@ -163,8 +163,8 @@ def test_speak_translate(get_function):
*[Report an issue or leave feedback](https://speak.com/chatgpt?
rid=sjqtmni8qkvtwr6jlj3xl1lz)*
''',
- "extra_response_instructions": '''Use all information in the API
- response and fully render all Markdown.\nAlways end your response with
+ "extra_response_instructions": '''Use all information in the API
+ response and fully render all Markdown.\nAlways end your response with
a link to report an issue or leave feedback on the plugin.''',
}
@@ -175,7 +175,7 @@ def test_speak_translate(get_function):
"phrase_to_translate": "Wie sagt man 'apple' auf Deutsch?",
"learning_language": "Chinese",
"native_language": "English",
- "additional_context": '''Looking for the German word for the fruit
+ "additional_context": '''Looking for the German word for the fruit
that is commonly red, green, or yellow.''',
"full_query": "What is the German word for 'apple'?",
}
@@ -190,59 +190,59 @@ def test_speak_explainPhrase(get_function):
mock_response_data = {
"explanation": '''
-
The phrase you entered is: ""
- This phrase is commonly used in Chinese and it means "What happened?" or
- "What's going on?" It is often used when you want to express surprise or
- curiosity about a situation or event. Imagine someone just witnessed
- something unexpected and they are genuinely interested in finding out more
- details about it.
-
- For example, if you witnessed a car accident on the street and you're
- confused about what happened, you can ask "你们这是怎么回事啊?"
- (Nǐmen zhè shì zěnme huíshì a?) which translates to "What
+ This phrase is commonly used in Chinese and it means "What happened?" or
+ "What's going on?" It is often used when you want to express surprise or
+ curiosity about a situation or event. Imagine someone just witnessed
+ something unexpected and they are genuinely interested in finding out more
+ details about it.
+
+ For example, if you witnessed a car accident on the street and you're
+ confused about what happened, you can ask "你们这是怎么回事啊?"
+ (Nǐmen zhè shì zěnme huíshì a?) which translates to "What
happened here?" or "What's going on here?"
-
- Here are a few alternative phrases that convey a similar meaning and can
+ Here are a few alternative phrases that convey a similar meaning and can
be used in different situations:
- 1. "发生了什么事?" (Fāshēngle shénme shì?) - This phrase is a bit more
- formal and it means "What happened?" It can be used in various contexts,
- such as asking about a news event or inquiring about a situation you're
+ 1. "发生了什么事?" (Fāshēngle shénme shì?) - This phrase is a bit more
+ formal and it means "What happened?" It can be used in various contexts,
+ such as asking about a news event or inquiring about a situation you're
not familiar with.
- 2. "出什么事了?" (Chū shénme shì le?) - This phrase is more
- casual and can be translated as "What's going on?" or "What's
- happening?" It is often used when you want to express surprise
+ 2. "出什么事了?" (Chū shénme shì le?) - This phrase is more
+ casual and can be translated as "What's going on?" or "What's
+ happening?" It is often used when you want to express surprise
and inquire about a situation.
- 3. "怎么回事?" (Zěnme huíshì?) - This phrase is similar to the first one
- we mentioned, but it's a bit more informal. It can be used in both casual
+ 3. "怎么回事?" (Zěnme huíshì?) - This phrase is similar to the first one
+ we mentioned, but it's a bit more informal. It can be used in both casual
and formal situations to ask "What's going on?" or "What happened?"
- You and your friend are walking in the park when suddenly, a UFO
+ You and your friend are walking in the park when suddenly, a UFO
appears in the sky.
- Person A: 哇,你看!天上有 个飞碟!(Wā, nǐ kàn! Tiānshàng yǒu gè
+ Person A: 哇,你看!天上有 个飞碟!(Wā, nǐ kàn! Tiānshàng yǒu gè
fēidié!) Wow, look! There's a UFO in the sky!
Person B: 什么?你确定吗?(Shénme? Nǐ quèdìng ma?) What? Are you sure?
- Person A: 是的!你们这是怎么回事啊?(Shì de! Nǐmen zhè shì
- zěnme huíshì a?)
+ Person A: 是的!你们这是怎么回事啊?(Shì de! Nǐmen zhè shì
+ zěnme huíshì a?)
Yes! What happened here?
''',
- "extra_response_instructions": '''Use all information in the API
- response and fully render all Markdown.\nAlways end your response with
+ "extra_response_instructions": '''Use all information in the API
+ response and fully render all Markdown.\nAlways end your response with
a link to report an issue or leave feedback on the plugin.''',
}
mock_response = MagicMock()
@@ -256,7 +256,7 @@ def test_speak_explainPhrase(get_function):
"Someone said this to me after a surprising event occurred."
"Want to understand the tone and context it's used in."
),
- "full_query": '''Somebody said 'no mames' to me, what does that
+ "full_query": '''Somebody said 'no mames' to me, what does that
mean?''',
}
result = get_function(requestBody=explain_phrase_request)
@@ -268,60 +268,60 @@ def test_speak_explainTask(get_function):
mock_response_data = {
"explanation": '''
-
The phrase you entered is: ""
- This phrase is commonly used in Chinese and it means "What happened?" or
- "What's going on?" It is often used when you want to express surprise or
- curiosity about a situation or event. Imagine someone just witnessed
- something unexpected and they are genuinely interested in finding out more
- details about it.
-
- For example, if you witnessed a car accident on the street and you're
- confused about what happened, you can ask "你们这是怎么回事啊?"
- (Nǐmen zhè shì zěnme huíshì a?) which translates to "What happened
+ This phrase is commonly used in Chinese and it means "What happened?" or
+ "What's going on?" It is often used when you want to express surprise or
+ curiosity about a situation or event. Imagine someone just witnessed
+ something unexpected and they are genuinely interested in finding out more
+ details about it.
+
+ For example, if you witnessed a car accident on the street and you're
+ confused about what happened, you can ask "你们这是怎么回事啊?"
+ (Nǐmen zhè shì zěnme huíshì a?) which translates to "What happened
here?" or "What's going on here?"
-
- Here are a few alternative phrases that convey a similar meaning and can
+ Here are a few alternative phrases that convey a similar meaning and can
be used in different situations:
- 1. "发生了什么事?" (Fāshēngle shénme shì?) - This phrase is a bit more
- formal and it means "What happened?" It can be used in various contexts,
- such as asking about a news event or inquiring about a situation you're
+ 1. "发生了什么事?" (Fāshēngle shénme shì?) - This phrase is a bit more
+ formal and it means "What happened?" It can be used in various contexts,
+ such as asking about a news event or inquiring about a situation you're
not familiar with.
- 2. "出什么事了?" (Chū shénme shì le?) - This phrase is more casual and
- can be translated as "What's going on?" or "What's happening?" It is
- often used when you want to express surprise and inquire about a
+ 2. "出什么事了?" (Chū shénme shì le?) - This phrase is more casual and
+ can be translated as "What's going on?" or "What's happening?" It is
+ often used when you want to express surprise and inquire about a
situation.
- 3. "怎么回事?" (Zěnme huíshì?) - This phrase is similar to the first one
- we mentioned, but it's a bit more informal. It can be used in both casual
+ 3. "怎么回事?" (Zěnme huíshì?) - This phrase is similar to the first one
+ we mentioned, but it's a bit more informal. It can be used in both casual
and formal situations to ask "What's going on?" or "What happened?"
- You and your friend are walking in the park when suddenly, a UFO
+ You and your friend are walking in the park when suddenly, a UFO
appears in the sky.
- Person A: 哇,你看!天上有 个飞碟!(Wā, nǐ kàn! Tiānshàng yǒu gè
+ Person A: 哇,你看!天上有 个飞碟!(Wā, nǐ kàn! Tiānshàng yǒu gè
fēidié!) Wow, look! There's a UFO in the sky!
- Person B: 什么?你确定吗?(Shénme? Nǐ quèdìng ma?) What? Are
+ Person B: 什么?你确定吗?(Shénme? Nǐ quèdìng ma?) What? Are
you sure?
- Person A: 是的!你们这是怎么回事啊?(Shì de! Nǐmen zhè shì zěnme
- huíshì a?)
+ Person A: 是的!你们这是怎么回事啊?(Shì de! Nǐmen zhè shì zěnme
+ huíshì a?)
Yes! What happened here?
''',
- "extra_response_instructions": '''Use all information in the API
- response and fully render all Markdown.\nAlways end your response with
+ "extra_response_instructions": '''Use all information in the API
+ response and fully render all Markdown.\nAlways end your response with
a link to report an issue or leave feedback on the plugin.''',
}
mock_response = MagicMock()
@@ -420,7 +420,7 @@ def test_create_qr_code_getQRCode(get_function):
mock_response_data = {
'img_tag': """'"""
}
mock_response = MagicMock()
diff --git a/test/utils/test_context_utils.py b/test/utils/test_context_utils.py
index c296705414..7f635364ff 100644
--- a/test/utils/test_context_utils.py
+++ b/test/utils/test_context_utils.py
@@ -98,7 +98,7 @@ def test_load_markdown_context_to_memory_preserves_existing_conversation(
## Task Completed
Data analysis of customer sales using pandas and matplotlib.
-## Key Findings
+## Key Findings
- Sales increased 15% in Q4
- Electronics is top category
- Customer retention: 85%
@@ -191,7 +191,7 @@ def test_load_markdown_context_to_memory_with_workflow_content(
### Agents Involved
- Agent A: Data collection using web_toolkit
-- Agent B: Analysis using pandas_toolkit
+- Agent B: Analysis using pandas_toolkit
- Agent C: Reporting using email_toolkit
### Results
diff --git a/uv.lock b/uv.lock
index 373748b56b..4bb2d7795b 100644
--- a/uv.lock
+++ b/uv.lock
@@ -878,6 +878,7 @@ dependencies = [
{ name = "jsonschema" },
{ name = "mcp" },
{ name = "openai" },
+ { name = "pillow" },
{ name = "psutil" },
{ name = "pydantic" },
{ name = "tiktoken" },
@@ -928,6 +929,7 @@ all = [
{ name = "google-genai" },
{ name = "googlemaps" },
{ name = "gradio" },
+ { name = "grpcio" },
{ name = "html2text" },
{ name = "httplib2" },
{ name = "ibm-watsonx-ai", version = "1.3.42", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version < '3.11'" },
@@ -1225,6 +1227,7 @@ rag = [
{ name = "crawl4ai" },
{ name = "faiss-cpu" },
{ name = "google-genai" },
+ { name = "grpcio" },
{ name = "nebula3-python" },
{ name = "neo4j" },
{ name = "numpy", version = "1.26.4", source = { registry = "https://pypi.org/simple" }, marker = "python_full_version < '3.13'" },
@@ -1249,6 +1252,7 @@ storage = [
{ name = "chromadb" },
{ name = "faiss-cpu" },
{ name = "google-cloud-storage" },
+ { name = "grpcio" },
{ name = "mem0ai" },
{ name = "nebula3-python" },
{ name = "neo4j" },
@@ -1405,6 +1409,9 @@ requires-dist = [
{ name = "googlemaps", marker = "extra == 'web-tools'", specifier = ">=4.10.0,<5" },
{ name = "gradio", marker = "extra == 'all'", specifier = ">=3,<4" },
{ name = "gradio", marker = "extra == 'dev'", specifier = ">=3,<4" },
+ { name = "grpcio", marker = "extra == 'all'", specifier = ">=1.72.0" },
+ { name = "grpcio", marker = "extra == 'rag'", specifier = ">=1.72.0" },
+ { name = "grpcio", marker = "extra == 'storage'", specifier = ">=1.72.0" },
{ name = "html2text", marker = "extra == 'all'", specifier = ">=2024.2.26" },
{ name = "html2text", marker = "extra == 'owl'", specifier = ">=2024.2.26" },
{ name = "html2text", marker = "extra == 'web-tools'", specifier = ">=2024.2.26" },
@@ -1489,6 +1496,7 @@ requires-dist = [
{ name = "pandas", marker = "extra == 'owl'", specifier = ">=2" },
{ name = "pgvector", marker = "extra == 'all'", specifier = ">=0.2.4,<0.3" },
{ name = "pgvector", marker = "extra == 'storage'", specifier = ">=0.2.4,<0.3" },
+ { name = "pillow", specifier = ">=10.0.0" },
{ name = "playwright", marker = "extra == 'all'", specifier = ">=1.50.0" },
{ name = "playwright", marker = "extra == 'owl'", specifier = ">=1.50.0" },
{ name = "playwright", marker = "extra == 'web-tools'", specifier = ">=1.50.0" },