
Add LM Studio Embeddings support#5740

Open
paipeline wants to merge 1 commit into FlowiseAI:main from paipeline:add-lmstudio-embeddings-support

Conversation

@paipeline

Issue

Fixes #1277

Problem

Flowise doesn't support LM Studio for embedding generation. LM Studio is a popular local AI hosting solution with an OpenAI-compatible API, but users currently can't use it to generate embeddings in their Flowise workflows.

Solution

Added a complete LM Studio Embeddings node with:

New Files

  • packages/components/nodes/embeddings/LMStudioEmbedding/LMStudioEmbedding.ts — Full embedding node using @langchain/openai OpenAIEmbeddings with LM Studio defaults
  • packages/components/credentials/LMStudioApi.credential.ts — Optional API key credential
  • packages/components/nodes/embeddings/LMStudioEmbedding/lmstudio.png — Node icon
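
For reviewers, the credential file is tiny; below is a minimal sketch of what it might contain, following the pattern of Flowise's existing *.credential.ts files (field names here are assumptions, not the PR's verbatim code):

```ts
// LMStudioApi.credential.ts — illustrative sketch following the Flowise
// credential convention; the key is optional since local LM Studio
// instances typically don't require authentication.
import { INodeParams, INodeCredential } from '../src/Interface'

class LMStudioApi implements INodeCredential {
    label: string
    name: string
    version: number
    inputs: INodeParams[]

    constructor() {
        this.label = 'LM Studio API'
        this.name = 'lmStudioApi'
        this.version = 1.0
        this.inputs = [
            {
                label: 'LM Studio API Key',
                name: 'lmStudioApiKey',
                type: 'password',
                optional: true
            }
        ]
    }
}

module.exports = { credClass: LMStudioApi }
```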

Features

  • Default http://localhost:1234/v1 base URL (standard LM Studio port)
  • Optional API key (local instances typically don't need auth)
  • Configurable model name, batch size, timeout, and dimensions
  • Follows existing Flowise embedding node patterns (LocalAI, OpenAI)
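
In Flowise, these features surface as the node's inputs array. A rough sketch of the declaration (names and flags are assumptions modeled on the LocalAI/OpenAI embedding nodes; the placeholder matches the code quoted in the review below):

```ts
// Illustrative skeleton of the node class; only the input declarations
// are shown, and field names are assumptions based on existing nodes.
class LMStudioEmbedding_Embeddings {
    label = 'LM Studio Embeddings'
    name = 'lmStudioEmbeddings'
    type = 'LMStudioEmbeddings'
    inputs = [
        { label: 'Base Path', name: 'basePath', type: 'string', default: 'http://localhost:1234/v1' },
        { label: 'Model Name', name: 'modelName', type: 'string', placeholder: 'text-embedding-ada-002' },
        { label: 'Strip New Lines', name: 'stripNewLines', type: 'boolean', optional: true, additionalParams: true },
        { label: 'Batch Size', name: 'batchSize', type: 'number', optional: true, additionalParams: true },
        { label: 'Timeout', name: 'timeout', type: 'number', optional: true, additionalParams: true },
        { label: 'Dimensions', name: 'dimensions', type: 'number', optional: true, additionalParams: true }
    ]
}
```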

Benefits

  • Privacy: Embeddings processed locally, data never leaves the machine
  • Cost: No per-token API costs
  • Offline: Works without internet once the model is loaded
  • Flexible: Any embedding model supported by LM Studio

Usage

  1. Start LM Studio and load an embedding model
  2. Add "LM Studio Embeddings" node in Flowise
  3. Connect to vector stores or RAG pipelines
  4. Default config works out of the box for local LM Studio
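
Before wiring the node into a flow, you can sanity-check that LM Studio is serving embeddings with a few lines of TypeScript (the model name below is only an example; use whatever model you loaded, and note the dummy API key that local servers typically ignore):

```ts
import { OpenAIEmbeddings } from '@langchain/openai'

// Points the stock OpenAI embeddings client at a local LM Studio server.
const embeddings = new OpenAIEmbeddings({
    openAIApiKey: 'lm-studio', // placeholder; local instances usually ignore it
    modelName: 'nomic-embed-text-v1.5', // example: replace with your loaded model
    configuration: { baseURL: 'http://localhost:1234/v1' }
})

const vector = await embeddings.embedQuery('hello world')
console.log(vector.length) // embedding dimensionality, e.g. 768
```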

- Add LMStudioEmbedding node for Flowise AI agent builder
- Support for LM Studio's OpenAI-compatible embedding API
- Default configuration for localhost:1234/v1 (standard LM Studio port)
- Optional API key support (typically not needed for local instances)
- Standard embedding features: batch processing, timeout, strip newlines
- Add LMStudioApi credential configuration
- Fixes FlowiseAI#1277: LM Studio embeddings support for local AI workflows

This lets users run locally hosted embedding models via LM Studio within their Flowise AI agent flows, providing privacy and cost benefits for embedding generation.

@gemini-code-assist
Contributor

Summary of Changes

Hello @paipeline, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request integrates LM Studio as a new embedding provider, addressing the lack of local embedding generation support within Flowise. By leveraging LM Studio's OpenAI-compatible API, users can now process embeddings on their local machines, ensuring data privacy, eliminating API costs, and enabling offline functionality. This enhancement provides a flexible solution for utilizing various embedding models supported by LM Studio within Flowise workflows.

Highlights

  • New LM Studio Embeddings Node: Introduced a new "LM Studio Embeddings" node, allowing users to generate embeddings locally using LM Studio's OpenAI-compatible API.
  • Optional API Key Credential: Added an optional LMStudioApi.credential.ts for API keys, though typically not required for local LM Studio instances.
  • Configurable Parameters: The new node supports configurable parameters such as base URL (defaulting to "http://localhost:1234/v1"), model name, batch size, and timeout.

Changelog

  • packages/components/credentials/LMStudioApi.credential.ts
    • Added a new credential class LMStudioApi to handle optional API keys for LM Studio.
  • packages/components/nodes/embeddings/LMStudioEmbedding/LMStudioEmbedding.ts
    • Implemented the LMStudioEmbedding_Embeddings node class, integrating @langchain/openai for LM Studio compatibility.
    • Configured the node with inputs for base URL, model name, strip new lines, batch size, and timeout.
    • Set up the init method to construct OpenAIEmbeddings with LM Studio specific configurations, including handling the optional API key and base URL.
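
As a simplified sketch of that init wiring (the Flowise nodeData plumbing is omitted and parameter names are assumptions, not the PR's verbatim code):

```ts
import { OpenAIEmbeddings } from '@langchain/openai'

// Hypothetical shape of the node inputs after Flowise resolves them.
interface LMStudioEmbeddingInputs {
    basePath?: string
    modelName?: string
    apiKey?: string
    stripNewLines?: boolean
    batchSize?: string
    timeout?: string
}

function buildLMStudioEmbeddings(inputs: LMStudioEmbeddingInputs): OpenAIEmbeddings {
    return new OpenAIEmbeddings({
        // Local servers usually ignore the key, but the OpenAI client
        // requires a non-empty value, hence the fallback.
        openAIApiKey: inputs.apiKey || 'lm-studio',
        modelName: inputs.modelName,
        stripNewLines: inputs.stripNewLines,
        // Truthiness checks mirror the PR; the review below flags that
        // this style drops an explicit 0 timeout.
        batchSize: inputs.batchSize ? parseInt(inputs.batchSize, 10) : undefined,
        timeout: inputs.timeout ? parseInt(inputs.timeout, 10) : undefined,
        configuration: { baseURL: inputs.basePath || 'http://localhost:1234/v1' }
    })
}
```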

@gemini-code-assist bot left a comment

Code Review

This pull request adds support for LM Studio embeddings, which is a great feature for users who want to run local models for privacy and cost savings. The implementation is clean and follows the existing structure of Flowise nodes. I've found a couple of areas for improvement: one is a minor bug in handling the timeout parameter, and the other is a suggestion to make the modelName placeholder less confusing for users, aligning with the principle of prioritizing cost-effective options. Overall, this is a solid contribution.

```ts
label: 'Model Name',
name: 'modelName',
type: 'string',
placeholder: 'text-embedding-ada-002',
```

Severity: medium

The placeholder text-embedding-ada-002 is specific to OpenAI models. Since LM Studio is used for running local models (often in GGUF format), this placeholder might be confusing for users. It would be more helpful to use a generic placeholder that suggests a local model name.

Suggested change:

```diff
- placeholder: 'text-embedding-ada-002',
+ placeholder: 'nomic-embed-text-v1.5',
```
References
  1. When setting a default model, prioritize cost-effective options to prevent unexpected high costs for users.


```ts
if (stripNewLines) obj.stripNewLines = stripNewLines
if (batchSize) obj.batchSize = parseInt(batchSize, 10)
if (timeout) obj.timeout = parseInt(timeout, 10)
```

Severity: medium

The current check if (timeout) will incorrectly handle a timeout value of 0. Since 0 is a valid input (often meaning no timeout), this condition will evaluate to false, and the parameter won't be set. A better check is needed to handle 0 correctly while still ignoring null, undefined, and empty strings.

Suggested change:

```diff
- if (timeout) obj.timeout = parseInt(timeout, 10)
+ if (timeout != null && timeout !== '') obj.timeout = parseInt(String(timeout), 10)
```
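
To see why the stricter guard matters: 0 is falsy in JavaScript, so the truthiness check silently skips an explicit zero timeout, while the suggested condition keeps it:

```ts
// Minimal demonstration of the falsy-zero pitfall described above.
function setTimeoutParam(timeout: number | string | null | undefined) {
    const truthy: { timeout?: number } = {}
    if (timeout) truthy.timeout = parseInt(String(timeout), 10)

    const strict: { timeout?: number } = {}
    if (timeout != null && timeout !== '') strict.timeout = parseInt(String(timeout), 10)

    console.log(truthy.timeout, strict.timeout)
}

setTimeoutParam(0)  // undefined 0 — the truthy guard drops an explicit 0
setTimeoutParam('') // undefined undefined — both ignore empty strings
setTimeoutParam(30) // 30 30
```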
