Developed by Fábio Ferreira (@fabiomlferreira). Check out dotcursorrules.com for more AI development enhancements.
Simple MCP Server to enable a human-in-the-loop workflow in AI-assisted development tools like Cursor. This server allows you to run commands, view their output, and provide textual feedback directly to the AI. It is also compatible with Cline and Windsurf.
For the best results, add the following to your AI assistant's custom prompt, either in a rule or directly in the prompt (e.g., in Cursor):
Whenever you want to ask a question or need user confirmation, always call the MCP interactive_feedback.
Whenever you're about to complete a user request, you MUST call the MCP interactive_feedback to solicit user feedback instead of simply ending the conversation.
After receiving user feedback, immediately call the MCP interactive_feedback again to confirm your understanding and how you plan to adjust.
Continue using the MCP interactive_feedback for interaction until the user submits empty feedback, indicating the conversation can end.
Remember: Seek feedback after each step, and confirm understanding after each piece of feedback the user provides.
This will ensure your AI assistant uses this MCP server to request user feedback before marking the task as completed.
New Feature: There is now an "Automatically add interaction prompt text" option in the interface (enabled by default). When checked, the above prompt text will be automatically appended to the end of each feedback message, guiding the AI model to continue using the interactive feedback tool. If you don't need this feature, you can uncheck it in the interface.
By guiding the assistant to check in with the user instead of branching out into speculative, high-cost tool calls, this module can drastically reduce the number of premium requests (e.g., OpenAI tool invocations) on platforms like Cursor. In some cases, it helps consolidate what would be up to 25 tool calls into a single, feedback-aware request — saving resources and improving performance.
This MCP server uses Qt's QSettings to store configuration on a per-project basis. This includes:
- The command to run.
- Whether to execute the command automatically on the next startup for that project (see "Execute automatically on next run" checkbox).
- The visibility state (shown/hidden) of the command section (this is saved immediately when toggled).
- Window geometry and state (general UI preferences).
These settings are typically stored in platform-specific locations (e.g., the registry on Windows, plist files on macOS, configuration files under ~/.config or ~/.local/share on Linux) under the organization name "InteractiveFeedbackMCP" and application name "InteractiveFeedbackMCP", with a unique group for each project directory.
The "Save Configuration" button in the UI primarily saves the current command typed into the command input field and the state of the "Execute automatically on next run" checkbox for the active project. The visibility of the command section is saved automatically when you toggle it. General window size and position are saved when the application closes.
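The per-project grouping idea can be sketched in a few lines. QSettings itself requires Qt, so this illustration uses Python's standard-library configparser as a stand-in: sections play the role of QSettings groups, and deriving the group name by hashing the project path is an assumption made for the example, not necessarily what the real server does.

```python
import configparser
import hashlib
from pathlib import Path

def group_for_project(project_dir: str) -> str:
    """Derive a stable, filesystem-safe group name from a project path.

    How the real server names its QSettings groups is an implementation
    detail; hashing the absolute path is one plausible scheme.
    """
    norm = str(Path(project_dir).resolve())
    return hashlib.sha1(norm.encode("utf-8")).hexdigest()[:16]

def save_project_settings(config: configparser.ConfigParser,
                          project_dir: str, command: str,
                          auto_execute: bool) -> None:
    # One section (analogous to a QSettings group) per project directory.
    section = group_for_project(project_dir)
    if not config.has_section(section):
        config.add_section(section)
    config.set(section, "command", command)
    config.set(section, "auto_execute", str(auto_execute))

config = configparser.ConfigParser()
save_project_settings(config, "/path/to/project-a", "npm test", True)
save_project_settings(config, "/path/to/project-b", "pytest", False)
# Each project keeps its own command and auto-execute flag.
```

The point of the grouping is isolation: changing the saved command for one project directory never touches the settings of another.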
- Prerequisites:
- Python 3.11 or newer.
- uv (Python package manager). Install it with:
- Windows:
pip install uv
- Linux/Mac:
curl -LsSf https://astral.sh/uv/install.sh | sh
- Get the code:
- Clone this repository:
git clone https://github.com/noopstudios/interactive-feedback-mcp.git
- Or download the source code.
- Navigate to the directory:
cd path/to/interactive-feedback-mcp
- Install dependencies:
uv sync
(this creates a virtual environment and installs packages)
- Run the MCP Server:
uv run server.py
- Configure in Cursor:
- Cursor typically allows specifying custom MCP servers in its settings. You'll need to point Cursor to this running server. The exact mechanism might vary, so consult Cursor's documentation for adding custom MCPs.
- Manual Configuration (e.g., via mcp.json). Remember to change the /Users/fabioferreira/Dev/scripts/interactive-feedback-mcp path to the actual path where you cloned the repository on your system.
{
  "mcpServers": {
    "interactive-feedback-mcp": {
      "command": "uv",
      "args": [
        "--directory",
        "/Users/fabioferreira/Dev/scripts/interactive-feedback-mcp",
        "run",
        "server.py"
      ],
      "timeout": 600,
      "autoApprove": [
        "interactive_feedback"
      ]
    }
  }
}
- You might use a server identifier like interactive-feedback-mcp when configuring it in Cursor.
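If you prefer to generate the mcp.json entry rather than editing it by hand, a few lines of Python can fill in your local clone path. This helper is not part of the project; it simply reproduces the configuration shown above (where Cursor expects the resulting file to live is tool-specific and not covered here).

```python
import json
from pathlib import Path

def cursor_mcp_entry(repo_dir: str) -> dict:
    """Build the mcpServers entry shown above for a local clone path."""
    repo = str(Path(repo_dir).expanduser())
    return {
        "mcpServers": {
            "interactive-feedback-mcp": {
                "command": "uv",
                "args": ["--directory", repo, "run", "server.py"],
                "timeout": 600,
                "autoApprove": ["interactive_feedback"],
            }
        }
    }

entry = cursor_mcp_entry("~/Dev/scripts/interactive-feedback-mcp")
print(json.dumps(entry, indent=2))
```

Passing the path through json.dumps also guards against escaping mistakes (e.g., backslashes in Windows paths) that are easy to make when editing the JSON by hand.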
- Similar setup principles apply for Cline and Windsurf. Configure the server command (e.g., uv run server.py with the correct --directory argument pointing to the project directory) in the respective tool's MCP settings, using interactive-feedback-mcp as the server identifier.
To run the server in development mode with a web interface for testing:
uv run fastmcp dev server.py
This will open a web interface and allow you to interact with the MCP tools for testing.
Here's an example of how the AI assistant would call the interactive_feedback tool:
<use_mcp_tool>
<server_name>interactive-feedback-mcp</server_name>
<tool_name>interactive_feedback</tool_name>
<arguments>
{
"project_directory": "/path/to/your/project",
"summary": "I've implemented the changes you requested and refactored the main module."
}
</arguments>
</use_mcp_tool>
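On the server side, the tool receives those same two arguments. The stub below only illustrates the expected argument and result shapes; it is not the project's implementation (the real tool opens a feedback UI and blocks until the user responds, and the exact return structure here is an assumption for illustration).

```python
def interactive_feedback_stub(project_directory: str, summary: str) -> dict:
    """Mimic the tool's interface: accept the two documented arguments
    and return collected feedback. The real server shows a feedback UI;
    this stub just echoes a canned reply for illustration."""
    if not project_directory:
        raise ValueError("project_directory is required")
    return {
        "project_directory": project_directory,
        "summary_shown": summary,
        # Per the prompt rules above, empty feedback signals that the
        # conversation can end.
        "user_feedback": "",
    }

result = interactive_feedback_stub(
    "/path/to/your/project",
    "I've implemented the changes you requested and refactored the main module.",
)
```

The summary argument is what the user sees before typing feedback, so it should describe what the assistant just did, as in the XML example above.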
If you find this Interactive Feedback MCP useful, the best way to show appreciation is by following Fábio Ferreira on X @fabiomlferreira.
For any questions, suggestions, or if you just want to share how you're using it, feel free to reach out on X!
Also, check out dotcursorrules.com for more resources on enhancing your AI-assisted development workflow.