Michał Roman · Tutorials · 5 min read
LangChain for beginners - A practical guide to building AI-powered applications with JavaScript
Introduction
Artificial intelligence is transforming the way we interact with technology, and large language models (LLMs) like OpenAI’s GPT-4 are at the heart of this revolution. However, working directly with LLM APIs can be cumbersome, especially when building complex applications that require memory, structured workflows, and interaction with external tools. This is where LangChain.js comes in.
LangChain.js is a JavaScript and TypeScript framework designed to simplify the development of LLM-powered applications. It provides an easy way to structure prompts, chain multiple AI calls together, store conversation history, and integrate with databases, APIs, and search tools. If you’ve ever wanted to build a chatbot, an AI-powered research assistant, or a smart automation tool, LangChain.js is a great place to start.
This guide will take you through the fundamentals of LangChain.js, how to set it up, and how to build your first AI application.
What is LangChain.js?
LangChain.js is an open-source framework that helps developers build applications using large language models efficiently. Instead of manually managing API calls, handling memory, and chaining multiple AI interactions, LangChain.js provides a structured way to do it.
With LangChain.js, you can:
- Build multi-step AI workflows where one AI response feeds into the next.
- Store and retrieve conversation history to maintain context.
- Use agents that dynamically decide which tools to use, like web searches or database queries.
- Connect AI to external sources, such as APIs, PDFs, or vector databases for knowledge retrieval.
At its core, LangChain.js is built around chains, which are sequences of AI interactions. These can be simple (like a single LLM call) or complex (involving memory, API calls, and tool usage).
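As a rough sketch of the idea (using a stand-in function instead of a real LLM call, so it runs anywhere), a two-step chain is simply one response feeding into the next prompt:

```javascript
// A stand-in for a real LLM call — in practice you would call an API here.
async function fakeLLM(prompt) {
  return `[answer to: ${prompt}]`;
}

// A two-step "chain": the first response becomes part of the second prompt.
async function twoStepChain(question) {
  const rephrased = await fakeLLM(`Rephrase this question: ${question}`);
  const answer = await fakeLLM(`Answer the question: ${rephrased}`);
  return answer;
}
```

LangChain.js packages this pattern up so you don't have to wire the steps together by hand, as we'll see below.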
Setting up LangChain.js
To get started, you need Node.js installed. If you haven’t already, install LangChain.js along with OpenAI’s package:
npm install langchain openai dotenv
Next, create a .env file and store your OpenAI API key:
OPENAI_API_KEY=your-openai-api-key
Then, load the environment variables in your JavaScript code:
import dotenv from "dotenv";
dotenv.config();
Now, you’re ready to start coding!
Your first LangChain.js application
Let’s start with something simple: a basic chatbot using OpenAI’s GPT-4.
import { OpenAI } from "langchain/llms/openai";
const llm = new OpenAI({ openAIApiKey: process.env.OPENAI_API_KEY, modelName: "gpt-4" });
const response = await llm.call("What are the benefits of using LangChain.js?");
console.log(response);
This example calls the OpenAI API and returns a response. While this is straightforward, it doesn’t remember past interactions or allow for structured workflows. Let’s make it more useful by introducing chains.
Building a chain
A chain in LangChain.js is a structured sequence of AI calls, allowing you to process data step by step. Let’s say we want to build an assistant that takes a user’s question, rephrases it for better clarity, and then queries GPT-4.
import { OpenAI } from "langchain/llms/openai";
import { LLMChain } from "langchain/chains";
import { PromptTemplate } from "langchain/prompts";
const llm = new OpenAI({ openAIApiKey: process.env.OPENAI_API_KEY, modelName: "gpt-4" });
const template = new PromptTemplate({
  inputVariables: ["question"],
  template: "Rephrase the following question to make it clearer: {question}"
});
const chain = new LLMChain({ llm, prompt: template });
const userQuestion = "Tell me how LangChain.js works";
const rephrasedQuestion = await chain.call({ question: userQuestion });
const finalResponse = await llm.call(rephrasedQuestion.text);
console.log(finalResponse);
This chain improves the clarity of user questions before sending them to the model, which can lead to better responses. Chains like this are useful when you need structured workflows with multiple steps.
Adding memory for context
One limitation of using a single AI call is that the model doesn’t remember previous messages. If you’re building a chatbot, you want it to recall previous interactions. LangChain.js provides memory modules for this purpose.
import { OpenAI } from "langchain/llms/openai";
import { ConversationChain } from "langchain/chains";
import { BufferMemory } from "langchain/memory";
const llm = new OpenAI({ openAIApiKey: process.env.OPENAI_API_KEY, modelName: "gpt-4" });
const memory = new BufferMemory();
const chat = new ConversationChain({ llm, memory });
console.log(await chat.call({ input: "Hello! What is LangChain.js?" }));
console.log(await chat.call({ input: "Can you explain it again but simpler?" }));
Now, when the user asks for clarification, the AI will remember the previous response and refine its answer instead of starting from scratch.
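Under the hood, buffer-style memory essentially amounts to storing each turn and prepending the transcript to every new prompt. Here is a simplified illustration of that mechanism, with a stubbed model so it runs without an API key (the class and helper names are hypothetical, not LangChain.js APIs):

```javascript
// Stub model that just reports how much conversation history it received.
async function fakeLLM(prompt) {
  return `response (prompt was ${prompt.split("\n").length} lines)`;
}

// A minimal buffer memory: keep every turn, prepend the transcript to new input.
class SimpleBufferMemory {
  constructor() {
    this.turns = [];
  }
  buildPrompt(input) {
    return [...this.turns, `Human: ${input}`].join("\n");
  }
  save(input, output) {
    this.turns.push(`Human: ${input}`, `AI: ${output}`);
  }
}

// Each call sees the full history, so follow-up questions stay in context.
async function chatTurn(memory, input) {
  const output = await fakeLLM(memory.buildPrompt(input));
  memory.save(input, output);
  return output;
}
```

The second call's prompt contains the first exchange, which is exactly why the real BufferMemory lets the model "remember" earlier messages.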
Using LangChain.js agents for smart AI assistants
LangChain.js also allows us to build agents—AI-powered assistants that can decide how to respond based on available tools. Agents can perform tasks like searching the web, fetching stock prices, or retrieving database records.
import { initializeAgentExecutorWithOptions } from "langchain/agents";
import { DynamicTool } from "langchain/tools";
const weatherTool = new DynamicTool({
  name: "weather",
  description: "Get current weather for a location.",
  func: async (location) => `The weather in ${location} is 25°C and sunny.`
});
const agent = await initializeAgentExecutorWithOptions([weatherTool], llm, {
  agentType: "zero-shot-react-description"
});
console.log(await agent.call({ input: "What’s the weather like in Paris?" }));
This agent can decide whether to call the weather tool or just generate a response on its own.
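Conceptually, the agent loop asks the model whether a tool applies to the input and either routes to it or answers directly. The sketch below mimics that decision with a simple rule instead of an LLM, so it runs offline (in a real agent, the model makes this choice by reasoning over the tool descriptions):

```javascript
// A toy tool registry; real LangChain tools also carry descriptions the model reads.
const tools = {
  weather: async (location) => `The weather in ${location} is 25°C and sunny.`,
};

// A rule-based stand-in for the model's tool-selection step.
async function toyAgent(input) {
  const match = input.match(/weather.* in (\w+)/i);
  if (match) {
    // The "agent" decided the weather tool is relevant.
    return tools.weather(match[1]);
  }
  // No tool matched — answer directly.
  return `I can answer that directly: ${input}`;
}
```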
Deploying your LangChain.js application
Once you’ve built a LangChain-powered application, you might want to deploy it. You can:
- Wrap it in an Express.js backend and serve it as an API.
- Use Next.js to create an AI-powered web application.
- Deploy it on Vercel, AWS Lambda, or Docker for cloud hosting.
For example, using Express.js:
import express from "express";
import { OpenAI } from "langchain/llms/openai";

const llm = new OpenAI({ openAIApiKey: process.env.OPENAI_API_KEY, modelName: "gpt-4" });
const app = express();
app.use(express.json());

app.post("/chat", async (req, res) => {
  const response = await llm.call(req.body.message);
  res.json({ response });
});

app.listen(3000, () => console.log("Server running on port 3000"));
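A client can then talk to this endpoint over plain HTTP. Here is a minimal sketch using the fetch API built into Node 18+ and browsers; the /chat path and the { response } shape are taken from the Express example above, so adjust them to match your deployment:

```javascript
// Send a message to the chat API and return the model's reply.
async function sendMessage(url, message) {
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message }),
  });
  if (!res.ok) throw new Error(`Request failed with status ${res.status}`);
  const data = await res.json();
  return data.response;
}
```

For example, `sendMessage("http://localhost:3000/chat", "Hello!")` would return the model's answer as a string.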
Summary
LangChain.js makes building AI applications structured, scalable, and efficient. Whether you’re creating chatbots, research assistants, or AI-powered workflows, it provides the tools you need. Experiment with chains, memory, and agents, and soon, you’ll be building powerful AI applications with ease!