Large language models are highly capable of understanding user needs and excel at communication. However, their abilities are largely confined to language interaction. To expand their functionality, frameworks such as workflows and agents have been developed to enable more diverse features. We leverage these frameworks to let edge resources and edge computing power serve large models, creating integrated cloud-edge application scenarios. In this article, we use the LangChain toolkit to implement an agent framework that enables an LLM (large language model) to call edge device functions.
Introduction to LangChain:
In the current "battle of AI models," many developers are eager to build AI applications but often face the following challenges: having to rewrite invocation code when switching models, handling complex multi-step task logic, being unable to integrate private data, and risking the loss of conversation history. LangChain emerged as a one-stop solution to these issues. As an open-source development framework for large-model applications, LangChain has become an essential tool for AI developers. Its modular design makes development as simple as assembling building blocks; it offers broad model compatibility, allowing models to be swapped without code changes; its data-processing and tool-invocation capabilities break through the functional boundaries of large models; its intelligent workflow orchestration makes complex tasks easy to complete; and its full-stack ecosystem covers everything from development to deployment.
Steps to implement the framework:
We will use LangChain to connect the LLM and MCP, building a complete framework that gives edge devices and edge computing power access to large models. The specific steps are as follows (the edge-side MCP runs on Synaptics' Astra SL1640 device; the MCP implementation is covered in the previous blog post):
1. Start MCP Server:
As introduced in the previous blog post, first run Astra's MCP Server. The edge capabilities (tools) are defined as two main functions: taking a photo with the camera and recognizing the people in that photo.
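For orientation, here is a minimal sketch of what such a server can look like using the FastMCP helper from the official MCP Python SDK. The tool names, helper logic, and return values are hypothetical placeholders, not the actual Astra implementation from the previous post:

```python
# Minimal sketch of an MCP server exposing two edge tools (hypothetical;
# the real implementation runs on the Astra SL1640 as described earlier).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("astra-edge")

@mcp.tool()
def take_photo() -> str:
    """Capture a photo with the device camera and return the image path."""
    # ... call the Astra camera pipeline here ...
    return "/tmp/photo.jpg"

@mcp.tool()
def recognize_person(image_path: str) -> str:
    """Run on-device person recognition on the given photo."""
    # ... call the Astra inference pipeline here ...
    return "Recognized: 1 person"

if __name__ == "__main__":
    # Serve over SSE so a remote LangChain client can connect.
    mcp.run(transport="sse")
```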
2. Write a LangChain program:
Next, write a LangChain program that uses the MCP server. This program can run on an application host such as a PC, mobile phone, or central control device, or even on the Astra platform itself. The LLM itself can be either a local model or a cloud-based API.
Register MCP information:
Use LangChain's Python packages to register the MCP server's information in the program, as sketched below.
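A minimal sketch, assuming the langchain-mcp-adapters package and an SSE endpoint exposed by the MCP server; the IP address and port are placeholders:

```python
import asyncio

from langchain_mcp_adapters.client import MultiServerMCPClient

async def load_edge_tools():
    # Register the Astra MCP server; the address below is a placeholder.
    client = MultiServerMCPClient({
        "astra": {
            "url": "http://192.168.1.100:8000/sse",
            "transport": "sse",
        }
    })
    # The MCP tools (e.g. take_photo, recognize_person) are exposed
    # to LangChain as ordinary tools.
    return await client.get_tools()

tools = asyncio.run(load_edge_tools())
```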

The IP address here is the Astra device's address.
Create Agent:
Create an agent that binds the LLM and the MCP tools together.
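A minimal sketch, assuming LangGraph's prebuilt ReAct agent and an OpenAI-hosted model; any chat model with tool-calling support, including a locally served one, can be substituted through its LangChain integration:

```python
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

# The model name is just an example; a local model would be wired in
# through its own LangChain chat-model integration instead.
llm = ChatOpenAI(model="gpt-4o-mini")

# Bind the LLM and the MCP tools loaded above into a single agent.
agent = create_react_agent(llm, tools)
```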

3. Use AI to automatically invoke MCP:
After completing the above steps, users can interact with the AI as usual. The system automatically determines from each request whether it needs to call MCP.
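Continuing with the agent and tools created above, here is a sketch of both behaviors; the prompts are made up for illustration:

```python
import asyncio

async def ask(question: str) -> None:
    result = await agent.ainvoke({"messages": [("user", question)]})
    # The last message holds the agent's final answer.
    print(result["messages"][-1].content)

# A request that needs the edge device: the agent calls the MCP tools.
asyncio.run(ask("Take a photo and tell me who is in it."))

# An ordinary question: the agent answers directly, without MCP.
asyncio.run(ask("What is edge computing?"))
```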
Cases where MCP is invoked:
When a request involves the edge device's capabilities, for example asking to take a photo and identify who is in it, the agent calls the corresponding MCP tools and incorporates their results into its answer.
Cases where MCP is not invoked:
For ordinary questions that the LLM can answer on its own, the agent responds directly without calling any MCP tool.
Through this framework, the MCP server only needs to accurately describe and implement its own functions, and the LLM can then access the corresponding resources according to user needs. Even if the large model itself lacks image recognition or high-precision judgment capabilities, these functions can still be provided by edge devices, while a certain level of data security is preserved.
The above is the complete content of this blog post. If you have any questions, feel free to leave a comment below the blog or contact us, and we will do our best to address them (o´ω`o)و. Thank you for reading, and see you in the next post!
FAQ 1: Is LangChain suitable for beginners?
A1: LangChain offers a modular design and comprehensive documentation support, making it ideal for beginners to quickly get started with developing AI applications.
FAQ 2: What models does LangChain support?
A2: LangChain supports a wide range of large language models, including OpenAI's GPT series, Anthropic's Claude, and Google's PaLM, as well as locally deployed models.
FAQ 3: How to protect data privacy?
A3: Through edge computing, data can be processed locally, reducing the need to upload it to the cloud, thereby effectively protecting user privacy.
FAQ 4: Does LangChain support multiple languages?
A4: Yes, LangChain supports multilingual processing and can facilitate multilingual conversations and tasks based on user needs.
FAQ 5: What are the advantages of edge computing?
A5: Edge computing reduces data-transmission latency and bandwidth requirements while enhancing data security and real-time processing capabilities.
FAQ 6: How to deploy LangChain applications?
A6: LangChain applications can be deployed on various platforms, including cloud servers, local devices, mobile devices, etc. The specific deployment method depends on the application scenario.
FAQ 7: Is LangChain open source?
A7: Yes, LangChain is an open-source framework that developers can freely use and extend.