- Add LMStudioEmbedding node for Flowise AI agent builder
- Support for LM Studio's OpenAI-compatible embedding API
- Default configuration for localhost:1234/v1 (standard LM Studio port)
- Optional API key support (typically not needed for local instances)
- Standard embedding features: batch processing, timeout, strip newlines
- Add LMStudioApi credential configuration
- Fixes FlowiseAI#1277: LM Studio embeddings support for local AI workflows

This enables users to use locally hosted embedding models via LM Studio within their Flowise AI agent flows, providing privacy and cost benefits for embeddings generation.
Summary of Changes

Hello @paipeline, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request integrates LM Studio as a new embedding provider, addressing the lack of local embedding generation support within Flowise. By leveraging LM Studio's OpenAI-compatible API, users can now generate embeddings on their local machines, ensuring data privacy, eliminating API costs, and enabling offline functionality. This enhancement provides a flexible way to use the embedding models supported by LM Studio within Flowise workflows.
Code Review
This pull request adds support for LM Studio embeddings, which is a great feature for users who want to run local models for privacy and cost savings. The implementation is clean and follows the existing structure of Flowise nodes. I've found a couple of areas for improvement: one is a minor bug in handling the timeout parameter, and the other is a suggestion to make the modelName placeholder less confusing for users, aligning with the principle of prioritizing cost-effective options. Overall, this is a solid contribution.
```ts
label: 'Model Name',
name: 'modelName',
type: 'string',
placeholder: 'text-embedding-ada-002',
```
The placeholder `text-embedding-ada-002` is specific to OpenAI models. Since LM Studio is used for running local models (often in GGUF format), this placeholder might be confusing for users. It would be more helpful to use a generic placeholder that suggests a local model name.
```diff
- placeholder: 'text-embedding-ada-002',
+ placeholder: 'nomic-embed-text-v1.5',
```
References
- When setting a default model, prioritize cost-effective options to prevent unexpected high costs for users.
```ts
if (stripNewLines) obj.stripNewLines = stripNewLines
if (batchSize) obj.batchSize = parseInt(batchSize, 10)
if (timeout) obj.timeout = parseInt(timeout, 10)
```
The current check `if (timeout)` will incorrectly handle a timeout value of `0`. Since `0` is a valid input (often meaning no timeout), this condition evaluates to false and the parameter is silently dropped. A better check is needed that keeps `0` while still ignoring `null`, `undefined`, and empty strings.
```diff
- if (timeout) obj.timeout = parseInt(timeout, 10)
+ if (timeout != null && timeout !== '') obj.timeout = parseInt(String(timeout), 10)
```
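To make the pitfall concrete, here is a small self-contained sketch of the two checks side by side (function names are hypothetical, not from the PR):

```typescript
// Truthiness check, as in the original code: 0 is falsy, so it is dropped.
function withTruthyCheck(timeout?: number | string): number | undefined {
    const obj: { timeout?: number } = {};
    if (timeout) obj.timeout = parseInt(String(timeout), 10);
    return obj.timeout;
}

// Explicit null/empty check, as suggested: 0 survives, blank input is ignored.
function withNullCheck(timeout?: number | string): number | undefined {
    const obj: { timeout?: number } = {};
    if (timeout != null && timeout !== "") obj.timeout = parseInt(String(timeout), 10);
    return obj.timeout;
}

console.log(withTruthyCheck(0)); // undefined — the 0 is silently lost
console.log(withNullCheck(0));   // 0 — preserved
```

The same truthiness issue also applies to `batchSize` above, though a batch size of `0` is less likely to be a meaningful user input than a timeout of `0`.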
Issue
Fixes #1277
Problem
Flowise doesn't support LM Studio for embeddings generation. LM Studio provides a popular local AI hosting solution with an OpenAI-compatible API, but users can't use it for embeddings in their workflows.
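As a rough illustration of what "OpenAI-compatible" means here, a client talks to LM Studio's local `/v1/embeddings` endpoint with the same request shape as OpenAI's embeddings API. The sketch below only builds the request; the base URL follows LM Studio's standard port and the model name is illustrative:

```typescript
// Hypothetical helper: assemble an OpenAI-style embeddings request
// targeting a local LM Studio server (standard port 1234).
const baseUrl = "http://localhost:1234/v1";

function buildEmbeddingsRequest(model: string, input: string[]) {
    return {
        url: `${baseUrl}/embeddings`,
        method: "POST",
        headers: { "Content-Type": "application/json" },
        // Same body shape as OpenAI's embeddings API: { model, input }
        body: JSON.stringify({ model, input })
    };
}

const req = buildEmbeddingsRequest("nomic-embed-text-v1.5", ["hello world"]);
console.log(req.url); // http://localhost:1234/v1/embeddings
```

Because the wire format matches, existing OpenAI client libraries can be pointed at the local server by overriding the base URL, which is the approach this PR takes.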
Solution
Added a complete LM Studio Embeddings node with:
New Files

- `packages/components/nodes/embeddings/LMStudioEmbedding/LMStudioEmbedding.ts` — full embedding node using `@langchain/openai` `OpenAIEmbeddings` with LM Studio defaults
- `packages/components/credentials/LMStudioApi.credential.ts` — optional API key credential
- `packages/components/nodes/embeddings/LMStudioEmbedding/lmstudio.png` — node icon

Features
- `http://localhost:1234/v1` base URL (standard LM Studio port)

Benefits
Usage
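A minimal sketch of the defaults the node is described as using (identifiers are hypothetical, not Flowise's actual internals):

```typescript
// Hypothetical shape of the node's resolved configuration.
interface LMStudioConfig {
    basePath: string;
    apiKey: string;
    modelName?: string;
}

function resolveConfig(overrides: Partial<LMStudioConfig> = {}): LMStudioConfig {
    return {
        basePath: overrides.basePath ?? "http://localhost:1234/v1", // standard LM Studio port
        apiKey: overrides.apiKey ?? "not-needed", // local instances typically ignore the key
        modelName: overrides.modelName // whatever embedding model is loaded in LM Studio
    };
}

console.log(resolveConfig().basePath); // http://localhost:1234/v1
```

In a typical flow, a user loads an embedding model in LM Studio, starts its local server, and adds this node with the defaults above; only `modelName` usually needs to be set.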