Notemd

by Jacob
Score: 55/100

Description

The Notemd plugin transforms your note-taking experience by integrating AI-powered processing and knowledge graph automation. It connects to both cloud and local LLM providers like OpenAI, Anthropic, Ollama, and LMStudio to intelligently chunk and process documents, insert context-aware wiki-links, and auto-generate concept notes. It includes features like web search summarization using Tavily or DuckDuckGo, duplicate detection and cleanup, and batch Mermaid/LaTeX syntax correction. Notes can be processed in bulk, with customizable file naming and storage paths. The plugin also supports retry logic for failed API calls, stable connection testing, and detailed configuration of provider settings. It's ideal for users building a personal knowledge base or conducting research-driven writing with structured link graphs.

Reviews

  • Jacob
    Reviewed on Apr 13th, 2026
    I’ve been using NOTEMD for a while now, and it has truly streamlined my writing process. It feels lightweight yet powerful, making the overall editing experience in Obsidian much more fluid and intuitive. It’s one of those rare plugins that does exactly what it promises without any unnecessary clutter. If you're looking for a more seamless and focused way to handle your notes, I highly recommend giving this a try. Great work!
  • Reviewed on Apr 13th, 2026
    Perfect plugin; it really helps me with paperwork and with extracting structured information from complicated paragraphs.

Stats

160 stars
3,503 downloads
4 forks
340 days
0 days
1 day
4 total PRs
0 open PRs
0 closed PRs
4 merged PRs
5 total issues
0 open issues
5 closed issues
0 commits

Requirements (Experimental)

  • API key for LLM providers like OpenAI, Anthropic, DeepSeek, Google, Mistral, Azure, or OpenRouter.

  • API key for Tavily (if using web research).

  • LMStudio or Ollama server running locally (if using local LLMs).

Latest Version

2 days ago

Changelog

Notemd v1.8.5

Highlights

  • Canonical Diagram UX Wording: Sidebar actions, workflow-builder help, command labels, workbench logs, and localized notices now consistently use Generate diagram / Preview diagram while legacy compatibility IDs remain functional underneath.
  • Release Truth Synchronization: Packaged version metadata, bilingual release notes, README version references, and the welcome-modal release digest are all aligned to 1.8.5.
  • Next CLI Wave Activated: The current contract-layering phase is treated as landed, and the next maintainer-facing work now advances under packaging / semantic-verification convergence with a checked-in helper, aligned runbooks, and dedicated Trellis task context.

New Features

  • Updated the first-install welcome modal so its built-in recent-release digest now shows 1.8.5 and 1.8.4, matching the actual shipped plugin version and the latest two release-note files.
  • Checked in the first maintainer semantic-verification convergence slice: npm run verify:diagram-semantics now emits packaging-boundary-aware checklist templates, and the matching maintainer docs/tests are aligned to the same truth.
  • Added a dedicated Trellis task shell for packaging / semantic-verification convergence so the next implementation batch can persist PRD, research, and verification context cleanly.

Fixes

  • Removed the remaining user-visible experimental diagram wording from canonical diagram actions where runtime behavior had already been unified internally.
  • Aligned workbench logs and completion/error notices with the canonical diagram naming, reducing visible drift between settings, workflow builder, and runtime progress output.
  • Synchronized package.json, manifest.json, versions.json, README.md, README_zh.md, and release-note references for the 1.8.5 patch boundary.
  • Locked the maintainer packaging-boundary wording so audit:render-host is no longer easily overread as proof of true heavy-runtime isolation.

Chores

  • Prepared the repo for the next packaging / semantic-verification convergence follow-through without reopening the already-landed diagram/provider contract split.


README file from GitHub


Notemd Plugin for Obsidian

English | 简体中文 | Español | Français | Deutsch | Italiano | Português | 繁體中文 | 日本語 | 한국어 | Русский | العربية | हिन्दी | বাংলা | Nederlands | Svenska | Suomi | Dansk | Norsk | Polski | Türkçe | עברית | ไทย | Ελληνικά | Čeština | Magyar | Română | Українська | Tiếng Việt | Bahasa Indonesia | Bahasa Melayu

Read docs in more languages: Language Hub. Browse repository docs: Docs Hub.

==================================================
  _   _       _   _ ___    __  __ ___
 | \ | | ___ | |_| |___|  |  \/  |___ \
 |  \| |/ _ \| __| |___|  | |\/| |   | |
 | |\  | (_) | |_| |___   | |  | |___| |
 |_| \_|\___/ \__|_|___|  | |  | |____/
==================================================
 AI-Powered Multi-Language Knowledge Enhancement
==================================================

An easy way to create your own knowledge base!

Notemd enhances your Obsidian workflow by integrating with various Large Language Models (LLMs) to process your multi-language notes, automatically generate wiki-links for key concepts, create corresponding concept notes, and perform web research, helping you build powerful knowledge graphs.

If you love using Notemd, please consider ⭐ giving it a star on GitHub or ☕️ buying me a coffee.

Version: 1.8.5


Quick Start

  1. Install & Enable: Get the plugin from the Obsidian Marketplace.
  2. Configure LLM: Go to Settings -> Notemd, select your LLM provider (like OpenAI or a local one like Ollama), and enter your API key/URL.
  3. Open Sidebar: Click the Notemd wand icon in the left ribbon to open the sidebar.
  4. Process a Note: Open any note and click "Process File (Add Links)" in the sidebar to automatically add [[wiki-links]] to key concepts.
  5. Run a Quick Workflow: Use the default "One-Click Extract" button to chain processing, batch generation, and Mermaid cleanup from one entry point.

That's it! Explore the settings to unlock more features like web research, translation, and content generation.

Language Support

Language Behavior Contract

| Concern | Scope | Default | Notes |
| --- | --- | --- | --- |
| UI Locale | Plugin UI text only (settings, sidebar, notices, dialogs) | auto | Follows Obsidian locale; current UI catalogs are en, ar, de, es, fa, fr, id, it, ja, ko, nl, pl, pt, pt-BR, ru, th, tr, uk, vi, zh-CN, zh-TW. |
| Task Output Language | LLM-generated task output (links, summaries, generation, extraction, translation target) | en | Can be global or per-task when Use different languages for tasks is enabled. |
| Disable auto translation | Non-Translate tasks keep source-language context | false | Explicit Translate tasks still enforce the configured target language. |
| Locale fallback | Missing UI key resolution | locale -> en | Implementation safety net; supported visible surfaces are regression-tested and should not silently fall back during normal use. |
  • Maintainer source docs are English and Simplified Chinese, and the published README translations are linked in the header above.
  • In-app UI locale coverage currently matches the explicit code catalog: en, ar, de, es, fa, fr, id, it, ja, ko, nl, pl, pt, pt-BR, ru, th, tr, uk, vi, zh-CN, zh-TW.
  • English fallback remains an implementation safety net, but supported visible surfaces are regression-tested and should not silently fall back during normal use.
  • Further details and contributing guidelines are tracked in the Language Hub.

Features

AI-Powered Document Processing

  • Multi-LLM Support: Connect to various cloud and local LLM providers (see Supported LLM Providers).
  • Smart Chunking: Automatically splits large documents into manageable chunks based on word count for processing.
  • Content Preservation: Aims to maintain original formatting while adding structure and links.
  • Progress Tracking: Real-time updates via the Notemd Sidebar or a progress modal.
  • Cancellable Operations: Cancel any processing task (single or batch) initiated from the sidebar via its dedicated cancel button. Command palette operations use a modal which can also be cancelled.
  • Multi-Model Configuration: Use different LLM providers and specific models for different tasks (Add Links, Research, Generate Title, Translate) or use a single provider for all.
  • Stable API Calls (Retry Logic): Optionally enable automatic retries for failed LLM API calls with configurable interval and attempt limits.
  • Resilient Provider Connection Tests: If the first provider test hits a transient network disconnect, Notemd now falls back to the stable retry sequence before failing, covering OpenAI-compatible, Anthropic, Google, Azure OpenAI, and Ollama transports.
  • Runtime Environment Transport Fallback: When a long-running provider request is dropped by requestUrl with transient network errors such as ERR_CONNECTION_CLOSED, Notemd now retries the same attempt through environment-specific fallback transport before entering the configured retry loop: desktop builds use Node http/https, while non-desktop environments use browser fetch. This reduces false failures on slow gateways and reverse proxies.
  • OpenAI-Compatible Stable Long-Request Chain Hardening: In stable mode, OpenAI-compatible calls now use an explicit 3-stage order for each attempt: primary direct streaming transport, then direct non-stream transport, then requestUrl fallback (which can still upgrade to streamed parsing when needed). This reduces false negatives where providers complete buffered responses but streaming pipes are unstable.
  • Protocol-Aware Streaming Fallback Across LLM APIs: Long-running fallback attempts now upgrade to protocol-aware streamed parsing across every built-in LLM path, not just OpenAI-compatible endpoints. Notemd now handles OpenAI/Azure-style SSE, Anthropic Messages streaming, Google Gemini SSE responses, and Ollama NDJSON streams on both desktop http/https and non-desktop fetch, and the remaining direct OpenAI-style provider entrypoints reuse that same shared fallback path.
  • China-Ready Provider Presets: Built-in presets now cover Qwen, Qwen Code, Doubao, Moonshot, Xiaomi MiMo, GLM, Z AI, MiniMax, Huawei Cloud MaaS, Baidu Qianfan, and SiliconFlow in addition to the existing global and local providers.
  • Reliable Batch Processing: Improved concurrent processing logic with staggered API calls to prevent rate-limiting errors and ensure stable performance during large batch jobs. The new implementation ensures that tasks are initiated at different intervals rather than all at once.
  • Accurate Progress Reporting: Fixed a bug where the progress bar could get stuck, ensuring that the UI always reflects the true status of the operation.
  • Robust Parallel Batch Processing: Resolved an issue where parallel batch operations would stall prematurely, ensuring all files are processed reliably and efficiently.
  • Progress Bar Accuracy: Fixed a bug where the progress bar for the "Create Wiki-Link & Generate Note" command would get stuck at 95%, ensuring it now correctly shows 100% upon completion.
  • Enhanced API Debugging: The "API Error Debugging Mode" now captures full response bodies from LLM providers and search services (Tavily/DuckDuckGo), and also records a per-attempt transport timeline with sanitized request URLs, elapsed duration, response headers, partial response bodies, parsed partial stream content, and stack traces for better troubleshooting across OpenAI-compatible, Anthropic, Google, Azure OpenAI, and Ollama fallbacks.
  • Developer Mode Panel: Settings now include a dedicated developer-only diagnostics panel that stays hidden unless "Developer mode" is enabled. It supports selecting diagnostic call paths and running repeated stability probes for the selected mode.
  • Redesigned Sidebar: Built-in actions are grouped into focused sections with clearer labels, live status, cancellable progress, and copyable logs to reduce sidebar clutter. The progress/log footer now stays visible even when every section is expanded, and the ready state uses a clearer standby progress track.
  • Sidebar Interaction & Readability Polish: Sidebar buttons now provide clearer hover/press/focus feedback, and colorful CTA buttons (including One-Click Extract and Batch generate from titles) use stronger text contrast for better readability across themes.
  • Single-File CTA Mapping: Colorful CTA styling is now reserved for single-file actions only. Batch/folder-level actions and mixed workflows use non-CTA styling to reduce action-scope misclicks.
  • Custom One-Click Workflows: Turn built-in sidebar utilities into reusable custom buttons with user-defined names and assembled action chains. A default One-Click Extract workflow is included out of the box.
  • Welcome Modal Release Digest: On first install, the welcome modal now includes the latest two release summaries in a scrollable panel so new users can quickly see what changed before configuring providers.
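The staggered batch behavior described above can be sketched as follows. This is a minimal illustration, not the plugin's actual implementation; `runStaggered` and its parameters are assumed names.

```typescript
// Start each task after an increasing delay so API calls are not issued
// all at once, reducing the chance of provider rate-limiting during
// large batch jobs. Results come back in input order via Promise.all.
async function runStaggered<T, R>(
  items: T[],
  task: (item: T) => Promise<R>,
  staggerMs: number
): Promise<R[]> {
  const runs = items.map(
    (item, i) =>
      new Promise<R>((resolve, reject) => {
        setTimeout(() => task(item).then(resolve, reject), i * staggerMs);
      })
  );
  return Promise.all(runs);
}
```

The key point is that staggering delays only the *start* of each task; tasks still overlap once started, so total wall time stays close to fully parallel execution.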

Knowledge Graph Enhancement

  • Automatic Wiki-Linking: Identifies and adds [[wiki-links]] to core concepts within your processed notes based on LLM output.
  • Concept Note Creation (Optional & Customizable): Automatically creates new notes for discovered concepts in a specified vault folder.
  • Customizable Output Paths: Configure separate relative paths within your vault for saving processed files and newly created concept notes.
  • Customizable Output Filenames (Add Links): Optionally overwrite the original file or use a custom suffix/replacement string instead of the default _processed.md when processing files for links.
  • Link Integrity Maintenance: Basic handling for updating links when notes are renamed or deleted within the vault.
  • Pure Concept Extraction: Extract concepts and create corresponding concept notes without modifying the original document. This is ideal for populating a knowledge base from existing documents without altering them. This feature has configurable options for creating minimal concept notes and adding backlinks.
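As a rough illustration of automatic wiki-linking (not the plugin's LLM-driven implementation), a naive pass might wrap known concept terms in `[[...]]`, longest term first, while skipping text that is already linked:

```typescript
// Hypothetical sketch: wrap known concept terms in [[wiki-links]].
// Limitation: the lookaround guards only catch terms immediately wrapped
// in [[...]]; a real implementation needs proper tokenization to avoid
// linking inside longer, already-linked phrases.
function addWikiLinks(text: string, concepts: string[]): string {
  const sorted = [...concepts].sort((a, b) => b.length - a.length);
  let out = text;
  for (const concept of sorted) {
    // Escape regex metacharacters in the concept name.
    const escaped = concept.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
    // Match whole words that are not already inside [[...]].
    const re = new RegExp(`(?<!\\[\\[)\\b${escaped}\\b(?!\\]\\])`, "g");
    out = out.replace(re, `[[${concept}]]`);
  }
  return out;
}
```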

Translation

  • AI-Powered Translation:
    • Translate note content using the configured LLM.
    • Large File Support: Automatically splits large files into smaller chunks based on the Chunk word count setting before sending them to the LLM. The translated chunks are then seamlessly combined back into a single document.
    • Supports translation between multiple languages.
    • Customizable target language in settings or in UI.
    • Automatically open the translated text on the right side of the original text for easy reading.
  • Batch Translate:
    • Translate all files within a selected folder.
    • Supports parallel processing when "Enable Batch Parallelism" is on.
    • Uses custom prompts for translation if configured.
    • Adds a "Batch translate this folder" option to the file explorer context menu.
  • Disable auto translation: When this option is enabled, non-Translate tasks will no longer force outputs into a specific language, preserving the original language context. The explicit "Translate" task will still perform translation as configured.
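The large-file translation flow above (split by the Chunk word count setting, translate each chunk, rejoin) can be sketched like this. `translate` stands in for the configured LLM call, and joining chunks with blank lines is an assumption about how the pieces are recombined:

```typescript
// Split text into word-count-bounded chunks, mirroring the
// "Chunk word count" setting.
function splitByWords(text: string, chunkWords: number): string[] {
  const words = text.split(/\s+/).filter(Boolean);
  const chunks: string[] = [];
  for (let i = 0; i < words.length; i += chunkWords) {
    chunks.push(words.slice(i, i + chunkWords).join(" "));
  }
  return chunks;
}

// Translate each chunk sequentially (keeps chunk order stable),
// then rejoin into a single document.
async function translateLargeFile(
  text: string,
  chunkWords: number,
  translate: (chunk: string) => Promise<string>
): Promise<string> {
  const chunks = splitByWords(text, chunkWords);
  const translated: string[] = [];
  for (const chunk of chunks) {
    translated.push(await translate(chunk));
  }
  return translated.join("\n\n");
}
```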

Web Research & Content Generation

  • Web Research & Summarization:
    • Perform web searches using Tavily (requires API key) or DuckDuckGo (experimental).
    • Improved Search Robustness: DuckDuckGo search now features enhanced parsing logic (DOMParser with Regex fallback) to handle layout changes and ensure reliable results.
    • Summarize search results using the configured LLM.
    • The output language of the summary can be customized in the settings.
    • Append summaries to the current note.
    • Configurable token limit for research content sent to the LLM.
  • Content Generation from Title:
    • Use the note title to generate initial content via LLM, replacing existing content.
    • Optional Research: Configure whether to perform web research (using the selected provider) to provide context for generation.
  • Batch Content Generation from Titles: Generate content for all notes within a selected folder based on their titles (respects the optional research setting). Successfully processed files are moved to a configurable "complete" subfolder (e.g., [foldername]_complete or a custom name) to avoid reprocessing.
  • Mermaid Auto-Fix Coupling: When Mermaid auto-fix is enabled, Mermaid-related workflows now automatically repair generated files or output folders after processing. This covers Process, Generate from Title, Batch Generate from Titles, Research & Summarize, Summarise as Mermaid, and Translate flows.

Utility Features

  • Summarise as Mermaid diagram:
    • This feature allows you to summarize the content of a note into a Mermaid diagram.
    • The output language of the Mermaid diagram can be customized in the settings.
    • Mermaid Output Folder: Configure the folder where the generated Mermaid diagram files will be saved.
    • Translate Summarize to Mermaid Output: Optionally translate the generated Mermaid diagram content into the configured target language.
  • Experimental Diagram Pipeline:
    • A spec-first diagram path can route note content into Mermaid, Obsidian JSON Canvas, or Vega-Lite instead of forcing every case through Mermaid text generation.
    • Current Mermaid adapter coverage in the spec-first path includes mindmap, flowchart, sequenceDiagram, classDiagram, erDiagram, and stateDiagram-v2.
    • Current Vega-Lite adapter coverage in the spec-first path includes cartesian bar, line, area, and point charts, plus controlled scatter, pie, and table layout hints that map onto safe built-in Vega-Lite templates.
    • For dataChart plans, the planner now seeds preferred Vega-Lite chart templates (line, pie, scatter, table, or fallback bar) so omitted layoutHints.chartType values do not silently collapse to the wrong chart shape.
    • Generated Mermaid artifacts are now validated with mermaid.parse before the renderer returns them, so malformed diagrams fail early instead of quietly leaking into preview/export steps.
    • Generated .canvas and .json artifacts are saved through the same output-path policy as Mermaid summaries, and preview surfaces now cover Mermaid, JSON Canvas, and Vega-Lite results.
    • HTML fallback artifacts are now generated as dedicated .html summaries when a richer renderer is not available, and the preview modal can open them through the iframe fallback path instead of only showing escaped source text.
    • Preview modals can now export rendered Mermaid/Canvas/Vega-Lite output as .svg and .png files beside the source note or beside the generated artifact, giving you stable image handoff paths without flattening everything into screenshots first.
    • Preview-only runs can also persist the raw generated artifact beside the current note using target-aware extensions and suffixes (_summ.md, _diagram.canvas, _diagram.json), so validation and handoff do not require rerunning the LLM step.
    • The existing Mermaid auto-fix path remains intact for Mermaid outputs only; non-Mermaid artifacts bypass the fixer instead of being pushed through incompatible post-processing.
    • Preview UI strings continue to follow the plugin UI locale (uiLocale: auto follows Obsidian), and preview/export theme defaults to the active Obsidian light/dark theme so Mermaid, JSON Canvas, Vega-Lite, and HTML fallback previews do not stay locked to the wrong palette after a theme switch.
| Target | Generated artifact | Inline preview | Export SVG | Export PNG | Save raw source | Notes |
| --- | --- | --- | --- | --- | --- | --- |
| Mermaid | _summ.md | Yes | Yes | Yes | Yes | Mermaid auto-fix remains available for Mermaid-only flows. |
| JSON Canvas | _diagram.canvas | Yes | Yes | Yes | Yes | Preview/export uses a theme-aware Canvas palette. |
| Vega-Lite | _diagram.json | Yes | Yes | Yes | Yes | Preview/export uses a theme-aware Vega-Lite config patch. |
| HTML | _diagram.html | Yes (iframe fallback) | No | No | Yes | Current pipeline does not promise raster/vector export for HTML artifacts yet. |
  • Simple Formula Format Correction:

    • Quickly fixes single-line math formulas delimited by single $ to standard double $$ blocks.
    • Single File: Process the current file via the sidebar button or command palette.
    • Batch Fix: Process all files in a selected folder via the sidebar button or command palette.
  • Check for Duplicates in Current File: This command helps identify potential duplicate terms within the active file.

  • Duplicate Detection: Basic check for duplicate words within the currently processed file's content (results logged to console).

  • Check and Remove Duplicate Concept Notes: Identifies potential duplicate notes within the configured Concept Note Folder based on exact name matches, plurals, normalization, and single-word containment compared to notes outside the folder. The scope of the comparison (which notes outside the concept folder are checked) can be configured to the entire vault, specific included folders, or all folders excluding specific ones. Presents a detailed list with reasons and conflicting files, then prompts for confirmation before moving identified duplicates to system trash. Shows progress during deletion.

  • Batch Mermaid Fix: Applies Mermaid and LaTeX syntax corrections to all Markdown files within a user-selected folder.

    • Workflow Ready: Can be used as a standalone utility or as a step inside a custom one-click workflow button.
    • Error Reporting: Generates a mermaid_error_{foldername}.md report listing files that still contain potential Mermaid errors after processing.
    • Move Error Files: Optionally moves files with detected errors to a specified folder for manual review.
    • Smart Detection: Now intelligently checks files for syntax errors using mermaid.parse before attempting fixes, saving processing time and avoiding unnecessary edits.
    • Safe Processing: Ensures syntax fixes are applied exclusively to Mermaid code blocks, preventing accidental modification of Markdown tables or other content. Includes robust safeguards to protect table syntax (e.g., | :--- |) from aggressive debug fixes.
    • Deep Debug Mode: If errors persist after the initial fix, an advanced deep debug mode is triggered. This mode handles complex edge cases, including:
      • Comment Integration: Automatically merges trailing comments (starting with %) into the edge label (e.g., A -- Label --> B; % Comment becomes A -- "Label(Comment)" --> B;).
      • Malformed Arrows: Fixes arrows absorbed into quotes (e.g., A -- "Label -->" B becomes A -- "Label" --> B).
      • Inline Subgraphs: Converts inline subgraph labels to edge labels.
      • Reverse Arrow Fix: Corrects non-standard X <-- Y arrows to Y --> X.
      • Direction Keyword Fix: Ensures direction keyword is lowercase inside subgraphs (e.g., Direction TB -> direction TB).
      • Comment Conversion: Converts // comments into edge labels (e.g., A --> B; // Comment -> A -- "Comment" --> B;).
      • Duplicate Label Fix: Simplifies repeated bracketed labels (e.g., Node["Label"]["Label"] -> Node["Label"]).
      • Invalid Arrow Fix: Converts invalid arrow syntax --|> to the standard -->.
      • Robust Label & Note Handling: Improved handling for labels containing special characters (like /) and better support for custom note syntax (note for ...), ensuring artifacts like trailing brackets are cleanly removed.
      • Advanced Fix Mode: Includes robust fixes for unquoted node labels containing spaces, special characters, or nested brackets (e.g., Node[Label [Text]] -> Node["Label [Text]"]), ensuring compatibility with complex diagrams like Stellar Evolution paths. Also corrects malformed edge labels (e.g., --["Label["--> to -- "Label" -->), converts inline comments (e.g., Consensus --> Adaptive; # Some advanced consensus to Consensus -- "Some advanced consensus" --> Adaptive), and fixes incomplete quotes at line ends (a trailing ;" is replaced with "]).
      • Note Conversion: Automatically converts note right/left of and standalone note : comments into standard Mermaid node definitions and connections (e.g., note right of A: text becomes NoteA["Note: text"] linked to A), preventing syntax errors and improving layout. Now supports both arrow links (-->) and solid links (---).
      • Extended Note Support: Automatically converts note for Node "Content" and note of Node "Content" into standard linked note nodes (e.g., NoteNode["Content"] linked to Node), ensuring compatibility with user-extended syntax.
      • Enhanced Note Correction: Automatically renames notes with sequential numbering (e.g., Note1, Note2) to prevent aliasing issues when multiple notes are present.
      • Parallelogram/Shape Fix: Corrects malformed node shapes like [/["Label["/] to standard ["Label"], ensuring compatibility with generated content.
      • Standardize Pipe Labels: Automatically fixes and standardizes edge labels containing pipes, ensuring they are properly quoted (e.g., -->|Text| becomes -->|"Text"| and -->|Math|^2| becomes -->|"Math|^2"|).
      • Misplaced Pipe Fix: Corrects misplaced edge labels appearing before the arrow (e.g., >|"Label"| A --> B becomes A -->|"Label"| B).
      • Merge Double Labels: Detects and merges complex double labels on a single edge (e.g., A -- Label1 -- Label2 --> B or A -- Label1 -- Label2 --- B) into a single, clean label with line breaks (A -- "Label1<br>Label2" --> B).
      • Unquoted Label Fix: Automatically quotes node labels that contain potentially problematic characters (e.g., quotes, equals signs, math operators) but are missing outer quotes (e.g., Plot[Plot "A"] becomes Plot["Plot "A""]), preventing render errors.
      • Intermediate Node Fix: Splits edges that contain an intermediate node definition into two separate edges (e.g., A -- B[...] --> C becomes A --> B[...] and B[...] --> C), ensuring valid Mermaid syntax.
      • Concatenated Label Fix: Robustly fixes node definitions where the ID is concatenated with the label (e.g., SubdivideSubdivide... becomes Subdivide["Subdivide..."]), even when preceded by pipe labels or when the duplication isn't exact, by validating against known node IDs.
  • Extract Specific Original Text:
    • Define a list of questions in settings.
    • Extracts verbatim text segments from the active note that answer these questions.
    • Merged Query Mode: Option to process all questions in a single API call for efficiency.
    • Translation: Option to include translations of the extracted text in the output.
    • Custom Output: Configurable save path and filename suffix for the extracted text file.
  • LLM Connection Test: Verify API settings for the active provider.
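The single-line formula correction listed under Utility Features can be sketched as a simple regex pass; the plugin's actual rules may differ, and this only handles a line that is exactly one inline formula:

```typescript
// Hypothetical sketch: promote a line consisting solely of one
// single-$ formula to a $$ display block; leave everything else alone.
function fixSingleLineFormula(line: string): string {
  const m = line.trim().match(/^\$([^$]+)\$$/);
  return m ? `$$${m[1]}$$` : line;
}
```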
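Three of the deep-debug Mermaid fixes listed above (reverse arrows, the direction keyword, and // comments) can be illustrated as per-line regex passes. This is a simplified sketch, not the plugin's fixer, which is far more defensive:

```typescript
// Apply three example deep-debug transforms to a single Mermaid line.
function deepDebugFixLine(line: string): string {
  let out = line;
  // Reverse Arrow Fix: "X <-- Y" becomes "Y --> X;".
  out = out.replace(/^(\s*)(\w+)\s*<--\s*(\w+)\s*;?$/, "$1$3 --> $2;");
  // Direction Keyword Fix: "Direction TB" -> "direction TB".
  out = out.replace(/\bDirection\s+(TB|BT|LR|RL)\b/, "direction $1");
  // Comment Conversion: 'A --> B; // Comment' -> 'A -- "Comment" --> B;'.
  out = out.replace(
    /^(\s*)(\w+)\s*-->\s*(\w+)\s*;\s*\/\/\s*(.+)$/,
    '$1$2 -- "$4" --> $3;'
  );
  return out;
}
```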

Installation

  1. Open Obsidian Settings → Community plugins.
  2. Ensure "Restricted mode" is off.
  3. Click Browse community plugins and search for "Notemd".
  4. Click Install.
  5. Once installed, click Enable.

Manual Installation

  1. Download the latest release assets from the GitHub Releases page. Each release also includes README.md for packaged reference, but manual installation only requires main.js, styles.css, and manifest.json.
  2. Navigate to your Obsidian vault's configuration folder: <YourVault>/.obsidian/plugins/.
  3. Create a new folder named notemd.
  4. Copy main.js, styles.css, and manifest.json into the notemd folder.
  5. Restart Obsidian.
  6. Go to Settings → Community plugins and enable "Notemd".

Configuration

Access plugin settings via: Settings → Community Plugins → Notemd (click the gear icon).

LLM Provider Configuration

  1. Active Provider: Select the LLM provider you want to use from the dropdown menu.
  2. Provider Settings: Configure the specific settings for the selected provider:
    • API Key: Required for most cloud providers (e.g., OpenAI, Anthropic, DeepSeek, Qwen, Qwen Code, Doubao, Moonshot, Xiaomi MiMo, GLM, Z AI, MiniMax, Huawei Cloud MaaS, Baidu Qianfan, SiliconFlow, Google, Mistral, Azure OpenAI, OpenRouter, xAI, Groq, Together, Fireworks, Requesty). Not needed for Ollama. Optional for LM Studio and the generic OpenAI Compatible preset when your endpoint accepts anonymous or placeholder access.
    • Base URL / Endpoint: The API endpoint for the service. Defaults are provided, but you may need to change this for local models (LMStudio, Ollama), gateways (OpenRouter, Requesty, OpenAI Compatible), or specific Azure deployments. Required for Azure OpenAI.
    • Model: The specific model name/ID to use (e.g., gpt-4o, claude-3-5-sonnet-20240620, google/gemini-flash-1.5, grok-4, moonshotai/kimi-k2-instruct-0905, accounts/fireworks/models/kimi-k2p5, anthropic/claude-3-7-sonnet-latest). Ensure the model is available at your endpoint/provider.
    • Temperature: Controls the randomness of the LLM's output (0=deterministic, 1=max creativity). Lower values (e.g., 0.2-0.5) are generally better for structured tasks.
    • API Version (Azure Only): Required for Azure OpenAI deployments (e.g., 2024-02-15-preview).
  3. Test Connection: Use the "Test Connection" button for the active provider to verify your settings. OpenAI-compatible providers now use provider-aware checks: endpoints such as Qwen, Qwen Code, Doubao, Moonshot, Xiaomi MiMo, GLM, Z AI, MiniMax, Huawei Cloud MaaS, Baidu Qianfan, SiliconFlow, Groq, Together, Fireworks, LMStudio, and OpenAI Compatible probe chat/completions directly, while providers with a reliable /models endpoint can still use model listing first. If the first probe fails with a transient network disconnect such as ERR_CONNECTION_CLOSED, Notemd automatically falls back to the stable retry sequence instead of failing immediately.
  4. Manage Provider Configurations: Use the "Export Providers" and "Import Providers" buttons to save/load your LLM provider settings to/from a notemd-providers.json file within the plugin's configuration directory. This allows for easy backup and sharing.
  5. Preset Coverage: In addition to the original providers, Notemd now includes preset entries for Qwen, Qwen Code, Doubao, Moonshot, Xiaomi MiMo, GLM, Z AI, MiniMax, Huawei Cloud MaaS, Baidu Qianfan, SiliconFlow, xAI, Groq, Together, Fireworks, Requesty, and a generic OpenAI Compatible target for LiteLLM, vLLM, Perplexity, Vercel AI Gateway, or custom proxies.

Multi-Model Configuration

  • Use Different Providers for Tasks:
    • Disabled (Default): Uses the single "Active Provider" (selected above) for all tasks.
    • Enabled: Allows you to select a specific provider and optionally override the model name for each task ("Add Links", "Research & Summarize", "Generate from Title", "Translate", "Extract Concepts"). If the model override field for a task is left blank, it will use the default model configured for that task's selected provider.
  • Select different languages for different tasks:
    • Disabled (Default): Uses the single "Output language" for all tasks.
    • Enabled: Allows you to select a specific language for each task ("Add Links", "Research & Summarize", "Generate from Title", "Summarise as Mermaid diagram", "Extract Concepts").

Language Architecture (UI Locale vs Task Output Language)

  • UI Locale controls only plugin interface text (Settings labels, sidebar buttons, notices, and dialogs). The default auto mode follows Obsidian's current UI language.
    • Regional/script variants now resolve to the nearest shipped catalog instead of falling straight back to English. For example, fr-CA uses French, es-419 uses Spanish, pt-PT uses Portuguese, zh-Hans uses Simplified Chinese, and zh-Hant-HK uses Traditional Chinese.
  • Task Output Language controls model-generated task output (links, summaries, title generation, Mermaid summary, concept extraction, translation target).
  • Per-task language mode lets each task resolve its own output language from a unified policy layer instead of scattered per-module overrides.
  • Disable auto translation keeps non-Translate tasks in source-language context, while explicit Translate tasks still enforce the configured target language.
  • Mermaid-related generation paths follow the same language policy and can still trigger Mermaid auto-fix when enabled.

Stable API Call Settings

  • Enable Stable API Calls (Retry Logic):
    • Disabled (Default): A single API call failure will stop the current task.
    • Enabled: Automatically retries failed LLM API calls (useful for intermittent network issues or rate limits).
    • Connection Test Fallback: Even when stable mode is not enabled for normal calls, provider connection tests switch to the same retry sequence after the first transient network failure.
    • Runtime Transport Fallback (Environment-Aware): Long-running task requests that are transiently dropped by requestUrl retry the same attempt through an environment-aware fallback first: desktop builds use Node http/https, while non-desktop environments use browser fetch. These fallback attempts use protocol-aware streaming parsing across the built-in LLM paths, covering OpenAI-compatible SSE, Azure OpenAI SSE, Anthropic Messages SSE, Google Gemini SSE, and Ollama NDJSON output, so slow gateways can return body chunks earlier. The remaining direct OpenAI-style provider entrypoints reuse the same shared fallback path.
    • OpenAI-Compatible Stable Order: In stable mode, each OpenAI-compatible attempt now follows direct streaming -> direct non-stream -> requestUrl (with streamed fallback when needed) before counting as a failed attempt. This prevents overly aggressive failures when only one transport mode is flaky.
  • Retry Interval (seconds): (Visible only when enabled) Time to wait between retry attempts (1-300 seconds). Default: 5.
  • Maximum Retries: (Visible only when enabled) Maximum number of retry attempts (0-10). Default: 3.
  • API Error Debugging Mode:
    • Disabled (Default): Uses standard, concise error reporting.
    • Enabled: Activates detailed error logging (similar to DeepSeek's verbose output) for all providers and tasks (including Translate, Search, and Connection Tests). This includes HTTP status codes, raw response text, request transport timelines, sanitized request URLs and headers, elapsed attempt durations, response headers, partial response bodies, parsed partial stream output, and stack traces, which is crucial for troubleshooting API connection issues and upstream gateway resets.
  • Developer Mode:
    • Disabled (Default): Hides all developer-only diagnostics controls from normal users.
    • Enabled: Shows a dedicated developer diagnostics panel in Settings.
  • Developer Provider Diagnostic (Long Request):
    • Diagnostic Call Mode: Choose runtime path per probe. OpenAI-compatible providers support additional forced modes (direct streaming, direct buffered, requestUrl-only) besides runtime modes.
    • Run Diagnostic: Runs one long-request probe with the selected call mode and writes Notemd_Provider_Diagnostic_*.txt in vault root.
    • Run Stability Test: Repeats the probe for configurable runs (1-10) using the selected call mode and saves an aggregated stability report.
    • Diagnostic Timeout: Configurable timeout per run (15-3600 seconds).
    • Why Use It: Faster than manual reproduction when a provider passes "Test connection" but fails on real long-running tasks (for example, translation on slow gateways).
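The Retry Interval and Maximum Retries settings above amount to a simple attempt loop. Here is a minimal Python sketch for illustration only (the plugin itself is written in TypeScript; the function name, parameters, and the caught exception type are hypothetical, not Notemd's actual API):

```python
import time

def call_with_retries(call, max_retries=3, retry_interval=5, sleep=time.sleep):
    """Sketch of the stable API call loop: one initial attempt plus up to
    `max_retries` retries, waiting `retry_interval` seconds between attempts."""
    attempts = max_retries + 1
    last_error = None
    for attempt in range(attempts):
        try:
            return call()
        except Exception as err:  # e.g. a transient ERR_CONNECTION_CLOSED
            last_error = err
            if attempt < attempts - 1:
                sleep(retry_interval)
    raise last_error
```

With max_retries=0 a single failure stops the task, which mirrors the disabled default.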

General Settings

Processed File Output
  • Customize Processed File Save Path:

    • Disabled (Default): Processed files (e.g., YourNote_processed.md) are saved in the same folder as the original note.
    • Enabled: Allows you to specify a custom save location.
  • Processed File Folder Path: (Visible only when the above is enabled) Enter a relative path within your vault (e.g., Processed Notes or Output/LLM) where processed files should be saved. Folders will be created if they don't exist. Do not use absolute paths (like C:...) or invalid characters.

  • Use Custom Output Filename for 'Add Links':

    • Disabled (Default): Processed files created by the 'Add Links' command use the default _processed.md suffix (e.g., YourNote_processed.md).
    • Enabled: Allows you to customize the output filename using the setting below.
  • Custom Suffix/Replacement String: (Visible only when the above is enabled) Enter the string to use for the output filename.

    • If left empty, the original file will be overwritten with the processed content.
    • If you enter a string (e.g., _linked), it will be appended to the original base name (e.g., YourNote_linked.md). Ensure the suffix doesn't contain invalid filename characters.
  • Remove Code Fences on Add Links:

    • Disabled (Default): Code fences (```) are kept in the content when adding links; any stray ```markdown markers are deleted automatically.
    • Enabled: Removes code fences from the content before adding links.
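The suffix rules above reduce to a small naming function. A Python sketch for illustration (the function name and signature are hypothetical, not the plugin's internals):

```python
def output_filename(base_name, use_custom_suffix=False, custom_suffix="_processed"):
    """Resolve the 'Add Links' output filename for a note named `base_name`.
    An empty custom suffix means the original file is overwritten in place."""
    if not use_custom_suffix:
        return f"{base_name}_processed.md"
    if custom_suffix == "":
        return f"{base_name}.md"  # overwrite the original note
    return f"{base_name}{custom_suffix}.md"
```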
Concept Note Output
  • Customize Concept Note Path:
    • Disabled: Automatic creation of notes for [[linked concepts]] is disabled.
    • Enabled (Default): Allows you to specify a folder where new concept notes will be created.
  • Concept Note Folder Path: (Visible only when the above is enabled) Enter a relative path within your vault (e.g., Concepts or Generated/Topics) where new concept notes should be saved. Folders will be created if they don't exist. Must be filled if customization is enabled. Do not use absolute paths or invalid characters.
  • Prerequisite Guidance Modal: Flows that need concept-note creation, including Process File/Folder (Add Links) and Extract Concepts, now warn you when this setting is not configured correctly. The modal offers Configure, Skip once, and Do not show again.
Concept Log File Output
  • Generate Concept Log File:
    • Disabled (Default): No log file is generated.
    • Enabled: Creates a log file listing newly created concept notes after processing. The format is:
      generate xx concepts md file
      1. concepts1
      2. concepts2
      ...
      n. conceptsn
      
  • Customize Log File Save Path: (Visible only when "Generate Concept Log File" is enabled)
    • Disabled (Default): The log file is saved in the Concept Note Folder Path (if specified) or the vault root otherwise.
    • Enabled: Allows you to specify a custom folder for the log file.
  • Concept Log Folder Path: (Visible only when "Customize Log File Save Path" is enabled) Enter a relative path within your vault (e.g., Logs/Notemd) where the log file should be saved. Must be filled if customization is enabled.
  • Customize Log File Name: (Visible only when "Generate Concept Log File" is enabled)
    • Disabled (Default): The log file is named Generate.log.
    • Enabled: Allows you to specify a custom name for the log file.
  • Concept Log File Name: (Visible only when "Customize Log File Name" is enabled) Enter the desired file name (e.g., ConceptCreation.log). Must be filled if customization is enabled.
Extract Concepts Task
  • Create minimal concept notes:
    • On (Default): Newly created concept notes will only contain the title (e.g., # Concept).
    • Off: Concept notes may include additional content, such as a "Linked From" backlink, if not disabled by the setting below.
  • Add "Linked From" backlink:
    • Off (Default): Does not add a backlink to the source document in the concept note during extraction.
    • On: Adds a "Linked From" section with a backlink to the source file.
Extract Specific Original Text
  • Questions for extraction: Enter a list of questions (one per line) that you want the AI to extract verbatim answers for from your notes.
  • Batch Extract Specific Original Text: In addition to the single-file command, the sidebar/workflow builder now supports running the same configured extraction questions across all .md and .txt files in a selected folder.
  • Translate output to corresponding language:
    • Off (Default): Outputs only the extracted text in its original language.
    • On: Appends a translation of the extracted text in the language selected for this task.
  • Merged query mode:
    • Off: Processes each question individually (higher precision but more API calls).
    • On: Sends all questions in a single prompt (faster and fewer API calls).
  • Customise extracted text save path & filename:
    • Off: Saves to the same folder as the original file with _Extracted suffix.
    • On: Allows you to specify a custom output folder and filename suffix.
Batch Mermaid Fix
  • Enable Mermaid Error Detection:
    • Off: Error detection is skipped after processing.
    • On (Default): Scans processed files for remaining Mermaid syntax errors and generates a mermaid_error_{foldername}.md report.
  • Move files with Mermaid errors to specified folder:
    • Off: Files with errors remain in place.
    • On: Moves any files that still contain Mermaid syntax errors after the fix attempt to a dedicated folder for manual review.
  • Mermaid error folder path: (Visible if above is enabled) The folder to move error files to.
Processing Parameters
  • Enable Batch Parallelism:
    • Disabled (Default): Batch processing tasks (like "Process Folder" or "Batch Generate from Titles") process files one by one (serially).
    • Enabled: Allows the plugin to process multiple files concurrently, which can significantly speed up large batch jobs.
  • Batch Concurrency: (Visible only when parallelism is enabled) Sets the maximum number of files to process in parallel. A higher number can be faster but uses more resources and may hit API rate limits. (Default: 1, Range: 1-20)
  • Batch Size: (Visible only when parallelism is enabled) The number of files to group into a single batch. (Default: 50, Range: 10-200)
  • Delay Between Batches (ms): (Visible only when parallelism is enabled) An optional delay in milliseconds between processing each batch, which can help manage API rate limits. (Default: 1000ms)
  • API Call Interval (ms): Minimum delay in milliseconds before and after each individual LLM API call. Crucial for low-rate APIs or to prevent 429 errors. Set to 0 for no artificial delay. (Default: 500ms)
  • Max Tokens: Maximum tokens the LLM should generate per response chunk. Affects cost and detail. (Default: 8192)
  • Chunk Word Count: Maximum words per chunk sent to the LLM. Affects the number of API calls for large files. Recommended: about one third of Max Tokens. If you have not customized it yet, changing Max Tokens auto-fills the recommended chunk size for you. (Default: 3000)
  • Enable Duplicate Detection: Toggles the basic check for duplicate words within processed content (results in console). (Default: Enabled)
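The relationship between Max Tokens and Chunk Word Count can be illustrated with a simple word-based splitter. This is a sketch only; the names are hypothetical and the plugin's actual chunk boundaries may differ:

```python
def chunk_by_words(text, chunk_word_count=3000):
    """Split text into chunks of at most `chunk_word_count` words, a rough
    way to keep each LLM request under the configured context budget."""
    words = text.split()
    return [" ".join(words[i:i + chunk_word_count])
            for i in range(0, len(words), chunk_word_count)]

def recommended_chunk_words(max_tokens):
    """About one third of Max Tokens, per the guidance above."""
    return max_tokens // 3
```

For example, a 10,000-word file with the default 3,000-word chunks produces four chunks, and therefore at least four API calls.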
Translation
  • Default Target Language: Select the default language you want to translate your notes into. This can be overridden in the UI when running the translation command. (Default: English)
  • Customise Translation File Save Path:
    • Disabled (Default): Translated files are saved in the same folder as the original note.
    • Enabled: Allows you to specify a relative path within your vault (e.g., Translations) where translated files should be saved. Folders will be created if they don't exist.
  • Use custom suffix for translated files:
    • Disabled (Default): Translated files use the default _translated.md suffix (e.g., YourNote_translated.md).
    • Enabled: Allows you to specify a custom suffix.
  • Custom Suffix: (Visible only when the above is enabled) Enter the custom suffix to append to translated filenames (e.g., _es or _fr).
Content Generation
  • Enable Research in "Generate from Title":
    • Disabled (Default): "Generate from Title" uses only the title as input.
    • Enabled: Performs web research using the configured Web Research Provider and includes the findings as context for the LLM during title-based generation.
  • Auto-run Mermaid Syntax Fix after Generation:
    • Enabled (Default): Automatically runs a Mermaid syntax-fixing pass after Mermaid-related workflows such as Process, Generate from Title, Batch Generate from Titles, Research & Summarize, Summarise as Mermaid, and Translate.
    • Disabled: Leaves generated Mermaid output untouched unless you run Batch Mermaid Fix manually or add it to a custom workflow.
  • Output Language: (New) Select the desired output language for "Generate from Title" and "Batch Generate from Title" tasks.
    • English (Default): Prompts are processed and output in English.
    • Other Languages: The LLM is instructed to perform its reasoning in English but provide the final documentation in your selected language (e.g., Español, Français, 简体中文, 繁體中文, العربية, हिन्दी, etc.).
  • Change Prompt Word: (New)
    • Change Prompt Word: Toggle this to override the built-in prompt word for a specific task.
    • Custom Prompt Word: Enter the custom prompt word to use for that task.
  • Use Custom Output Folder for 'Generate from Title':
    • Disabled (Default): Successfully generated files are moved to a subfolder named [OriginalFolderName]_complete relative to the original folder's parent (or Vault_complete if the original folder was the root).
    • Enabled: Allows you to specify a custom name for the subfolder where completed files are moved.
  • Custom Output Folder Name: (Visible only when the above is enabled) Enter the desired name for the subfolder (e.g., Generated Content, _complete). Invalid characters are not allowed. Defaults to _complete if left empty. This folder is created relative to the original folder's parent directory.
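The "complete" folder rule above can be sketched as a small path function. This is an illustrative Python approximation (the function name and the treatment of an empty custom name are assumptions, not the plugin's exact code):

```python
def complete_folder(original_folder, custom_name=None):
    """Resolve where successfully generated files are moved after
    'Generate from Title'. `original_folder` is vault-relative; "" is root."""
    if original_folder == "" and not custom_name:
        return "Vault_complete"
    name = custom_name if custom_name else original_folder.split("/")[-1] + "_complete"
    parent = "/".join(original_folder.split("/")[:-1])
    # The folder is created relative to the original folder's parent directory.
    return f"{parent}/{name}" if parent else name
```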
One-click Workflow Buttons
  • Visual Workflow Builder: Create custom workflow buttons from built-in actions without hand-writing the DSL.
  • Custom Workflow Buttons DSL: Advanced users can still edit the workflow definition text directly. Invalid DSL falls back to the default workflow safely and shows a warning in the sidebar/settings UI.
  • Workflow Error Strategy:
    • Stop on Error (Default): Stops the workflow immediately when one step fails.
    • Continue on Error: Continues running later steps and reports the number of failed actions at the end.
  • Default Workflow Included: One-Click Extract chains Process File (Add Links), Batch Generate from Titles, and Batch Mermaid Fix.
Custom Prompt Settings

This feature allows you to override the default instructions (prompts) sent to the LLM for specific tasks, giving you fine-grained control over the output.

  • Enable Custom Prompts for Specific Tasks:

    • Disabled (Default): The plugin uses its built-in default prompts for all operations.
    • Enabled: Activates the ability to set custom prompts for the tasks listed below. This is the master switch for this feature.
  • Use Custom Prompt for [Task Name]: (Visible only when the above is enabled)

    • For each supported task ("Add Links", "Generate from Title", "Research & Summarize", "Extract Concepts"), you can individually enable or disable your custom prompt.
    • Disabled: This specific task will use the default prompt.
    • Enabled: This task will use the text you provide in the corresponding "Custom Prompt" text area below.
  • Custom Prompt Text Area: (Visible only when a task's custom prompt is enabled)

    • Default Prompt Display: For your reference, the plugin displays the default prompt that it would normally use for the task. You can use the "Copy Default Prompt" button to copy this text as a starting point for your own custom prompt.
    • Custom Prompt Input: This is where you write your own instructions for the LLM.
    • Placeholders: You can (and should) use special placeholders in your prompt, which the plugin will replace with actual content before sending the request to the LLM. Refer to the default prompt to see which placeholders are available for each task. Common placeholders include:
      • {TITLE}: The title of the current note.
      • {RESEARCH_CONTEXT_SECTION}: The content gathered from web research.
      • {USER_PROMPT}: The content of the note being processed.
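Placeholder substitution behaves like plain string replacement before the request is sent. A minimal sketch, assuming simple literal replacement (the plugin's real templating may differ):

```python
def fill_prompt(template, title, user_prompt, research_context=""):
    """Replace Notemd-style placeholders with actual note content
    before the prompt is sent to the LLM."""
    return (template
            .replace("{TITLE}", title)
            .replace("{RESEARCH_CONTEXT_SECTION}", research_context)
            .replace("{USER_PROMPT}", user_prompt))
```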
Duplicate Check Scope
  • Duplicate Check Scope Mode: Controls which files are checked against the notes in your Concept Note Folder for potential duplicates.
    • Entire Vault (Default): Compares concept notes against all other notes in the vault (excluding the Concept Note Folder itself).
    • Include Specific Folders Only: Compares concept notes only against notes within the folders listed below.
    • Exclude Specific Folders: Compares concept notes against all notes except those within the folders listed below (and also excluding the Concept Note Folder).
    • Concept Folder Only: Compares concept notes only against other notes within the Concept Note Folder. This helps find duplicates purely inside your generated concepts.
  • Include/Exclude Folders: (Visible only if Mode is 'Include' or 'Exclude') Enter the relative paths of the folders you want to include or exclude, one path per line. Paths are case-sensitive and use / as the separator (e.g., Reference Material/Papers or Daily Notes). These folders cannot be the same as or inside the Concept Note Folder.
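The four scope modes reduce to a path filter over the vault. A Python sketch for illustration (mode strings and the function name are hypothetical shorthand, not the plugin's setting values):

```python
def in_scope(path, mode, concept_folder, folders=()):
    """Decide whether the note at `path` is compared against concept notes
    under the Duplicate Check Scope modes. Paths are vault-relative with '/'."""
    def inside(folder):
        return path == folder or path.startswith(folder + "/")
    if mode == "concept_folder_only":
        return inside(concept_folder)
    if inside(concept_folder):
        return False  # the other modes always exclude the Concept Note Folder itself
    if mode == "include":
        return any(inside(f) for f in folders)
    if mode == "exclude":
        return not any(inside(f) for f in folders)
    return True  # "entire_vault"
```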
Web Research Provider
  • Search Provider: Choose between Tavily (requires API key, recommended) and DuckDuckGo (experimental, often blocked by the search engine for automated requests). Used for "Research & Summarize Topic" and optionally for "Generate from Title".
  • Tavily API Key: (Visible only if Tavily is selected) Enter your API key from tavily.com.
  • Tavily Max Results: (Visible only if Tavily is selected) Maximum number of search results Tavily should return (1-20). Default: 5.
  • Tavily Search Depth: (Visible only if Tavily is selected) Choose basic (default) or advanced. Note: advanced provides better results but costs 2 API credits per search instead of 1.
  • DuckDuckGo Max Results: (Visible only if DuckDuckGo is selected) Maximum number of search results to parse (1-10). Default: 5.
  • DuckDuckGo Content Fetch Timeout: (Visible only if DuckDuckGo is selected) Maximum seconds to wait when trying to fetch content from each DuckDuckGo result URL. Default: 15.
  • Max Research Content Tokens: Approximate maximum tokens from combined web research results (snippets/fetched content) to include in the summarization prompt. Helps manage context window size and cost. (Default: 3000)
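The token cap above can be approximated with word-based truncation. A sketch under a stated assumption of roughly 0.75 words per token (the plugin's actual token estimate may use a different heuristic):

```python
def trim_research_content(snippets, max_tokens=3000, words_per_token=0.75):
    """Concatenate research snippets until an approximate token budget is
    reached, assuming ~0.75 words per token as a rough heuristic."""
    max_words = int(max_tokens * words_per_token)
    kept, used = [], 0
    for snippet in snippets:
        words = snippet.split()
        if used + len(words) > max_words:
            words = words[:max_words - used]  # truncate the last snippet
        if words:
            kept.append(" ".join(words))
            used += len(words)
        if used >= max_words:
            break
    return "\n\n".join(kept)
```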
Focused Learning Domain
  • Enable Focused Learning Domain:
    • Disabled (Default): Prompts sent to the LLM use the standard, general-purpose instructions.
    • Enabled: Allows you to specify one or more fields of study to improve the LLM's contextual understanding.
  • Learning Domain: (Visible only when the above is enabled) Enter your specific field(s), e.g., 'Materials Science', 'Polymer Physics', 'Machine Learning'. This will add a "Relevant Fields: [...]" line to the beginning of prompts, helping the LLM generate more accurate and relevant links and content for your specific area of study.

Usage Guide

Quick Workflows & Sidebar

  • Open the Notemd sidebar to access grouped action sections for core processing, generation, translation, knowledge, and utilities.
  • Use the Quick Workflows area at the top of the sidebar to launch custom multi-step buttons.
  • The default One-Click Extract workflow runs Process File (Add Links) -> Batch Generate from Titles -> Batch Mermaid Fix.
  • Workflow progress, per-step logs, and failures are shown in the sidebar, with a pinned footer that protects the progress bar and log area from being squeezed out by expanded sections.
  • The progress card keeps status text, a dedicated percentage pill, and time remaining readable at a glance, and the same custom workflows can be reconfigured from settings.

Core Processing: Add Wiki-Links

This is the core functionality focused on identifying concepts and adding [[wiki-links]].

Important: This process only works on .md or .txt files. You can convert PDF files to MD files for free using Mineru before further processing.

  1. Using the Sidebar:
    • Open the Notemd Sidebar (wand icon or command palette).
    • Open the .md or .txt file.
    • Click "Process File (Add Links)".
    • To process a folder: Click "Process Folder (Add Links)", select the folder, and click "Process".
    • Progress is shown in the sidebar. You can cancel the task using the "Cancel Processing" button in the sidebar.
    • Note for folder processing: Files are processed in the background without being opened in the editor.
  2. Using the Command Palette (Ctrl+P or Cmd+P):
    • Single File: Open the file and run Notemd: Process Current File.
    • Folder: Run Notemd: Process Folder, then select the folder. Files are processed in the background without being opened in the editor.
    • A progress modal appears for command palette actions, which includes a cancel button.
    • Note: the plugin automatically removes leading \boxed{ and trailing } lines if found in the final processed content before saving.

New Features

  1. Summarise as Mermaid diagram:

    • Open the note you want to summarize.
    • Run the command Notemd: Summarise as Mermaid diagram (via command palette or sidebar button).
    • The plugin will generate a new note with the Mermaid diagram.
  2. Translate Note/Selection:

    • Select text in a note to translate just that selection, or invoke the command with no selection to translate the entire note.
    • Run the command Notemd: Translate Note/Selection (via command palette or sidebar button).
    • A modal will appear allowing you to confirm or change the Target Language (defaulting to the setting specified in Configuration).
    • The plugin uses the configured LLM Provider (based on Multi-Model settings) to perform the translation.
    • The translated content is saved to the configured Translation Save Path with the appropriate suffix, and opened in a new pane to the right of the original content for easy comparison.
    • You can cancel this task via the sidebar button or modal cancel button.
  3. Batch Translate:

    • Run the command Notemd: Batch Translate Folder from the command palette and select a folder, or right-click a folder in the file explorer and choose "Batch translate this folder".
    • The plugin will translate all Markdown files in the selected folder.
    • Translated files are saved to the configured translation path but are not opened automatically.
    • This process can be cancelled via the progress modal.
  4. Research & Summarize Topic:
    • Select text in a note OR ensure the note has a title (this will be the search topic).
    • Run the command Notemd: Research and Summarize Topic (via command palette or sidebar button).
    • The plugin uses the configured Search Provider (Tavily/DuckDuckGo) and the appropriate LLM Provider (based on Multi-Model settings) to find and summarize information.
    • The summary is appended to the current note.
    • You can cancel this task via the sidebar button or modal cancel button.
    • Note: DuckDuckGo searches may fail due to bot detection. Tavily is recommended.
  5. Generate Content from Title:

    • Open a note (it can be empty).
    • Run the command Notemd: Generate Content from Title (via command palette or sidebar button).
    • The plugin uses the appropriate LLM Provider (based on Multi-Model settings) to generate content based on the note's title, replacing any existing content.
    • If the "Enable Research in 'Generate from Title'" setting is enabled, it will first perform web research (using the configured Web Research Provider) and include that context in the prompt sent to the LLM.
    • You can cancel this task via the sidebar button or modal cancel button.
  6. Batch Generate Content from Titles:

    • Run the command Notemd: Batch Generate Content from Titles (via command palette or sidebar button).
    • Select the folder containing the notes you want to process.
    • The plugin will iterate through each .md file in the folder (excluding _processed.md files and files in the designated "complete" folder), generating content based on the note's title and replacing existing content. Files are processed in the background without being opened in the editor.
    • Successfully processed files are moved to the configured "complete" folder.
    • This command respects the "Enable Research in 'Generate from Title'" setting for each note processed.
    • You can cancel this task via the sidebar button or modal cancel button.
    • Progress and results (number of files modified, errors) are shown in the sidebar/modal log.
  7. Check and Remove Duplicate Concept Notes:

    • Ensure the Concept Note Folder Path is correctly configured in settings.
    • Run Notemd: Check and Remove Duplicate Concept Notes (via command palette or sidebar button).
    • The plugin scans the concept note folder and compares filenames against notes outside the folder using several rules (exact match, plurals, normalization, containment).
    • If potential duplicates are found, a modal window appears listing the files, the reason they were flagged, and the conflicting files.
    • Review the list carefully. Click "Delete Files" to move the listed files to the system trash, or "Cancel" to take no action.
    • Progress and results are shown in the sidebar/modal log.
  8. Extract Concepts (Pure Mode):

    • This feature allows you to extract concepts from a document and create the corresponding concept notes without altering the original file. It's perfect for quickly populating your knowledge base from a set of documents.
    • Single File: Open a file and run the command Notemd: Extract concepts (create concept notes only) from the command palette or click the "Extract concepts (current file)" button in the sidebar.
    • Folder: Run the command Notemd: Batch extract concepts from folder from the command palette or click the "Extract concepts (folder)" button in the sidebar, then select a folder to process all its notes.
    • The plugin will read the files, identify concepts, and create new notes for them in your designated Concept Note Folder, leaving your original files untouched.
  9. Create Wiki-Link & Generate Note from Selection:

    • This powerful command streamlines the process of creating and populating new concept notes.
    • Select a word or phrase in your editor.
    • Run the command Notemd: Create Wiki-Link & Generate Note from Selection (it is recommended to assign a hotkey to this, like Cmd+Shift+W).
    • The plugin will:
      1. Replace your selected text with a [[wiki-link]].
      2. Check if a note with that title already exists in your Concept Note Folder.
      3. If it exists, it adds a backlink to the current note.
      4. If it doesn't exist, it creates a new, empty note.
      5. It then automatically runs the "Generate Content from Title" command on the new or existing note, populating it with AI-generated content.
  10. Extract Concepts and Generate Titles:

    • This command chains two powerful features together for a streamlined workflow.
    • Run the command Notemd: Extract Concepts and Generate Titles from the command palette (it is recommended to assign a hotkey to this).
    • The plugin will:
      1. First, run the "Extract concepts (current file)" task on the currently active file.
      2. Then, it will automatically run the "Batch generate from titles" task on the folder you have configured as your Concept note folder path in the settings.
    • This allows you to first populate your knowledge base with new concepts from a source document and then immediately flesh out those new concept notes with AI-generated content in a single step.
  11. Extract Specific Original Text:

    • Configure your questions in the settings under "Extract Specific Original Text".
    • Use the "Extract Specific Original Text" button in the sidebar to process the active file.
    • Use the "Batch Extract Specific Original Text" sidebar/workflow action to process every supported file in a selected folder with the same configured questions.
    • Merged Mode: Enables faster processing by sending all questions in one prompt.
    • Translation: Optionally translates the extracted text to your configured language.
    • Custom Output: Configure where and how the extracted file is saved.
  12. Batch Mermaid Fix:

    • Use the "Batch Mermaid Fix" button in the sidebar to scan a folder and fix common Mermaid syntax errors.
    • The plugin will report any files that still contain errors in a mermaid_error_{foldername}.md file.
    • Optionally configure the plugin to move these problematic files to a separate folder for review.
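The matching rules used by "Check and Remove Duplicate Concept Notes" (exact match, plurals, normalization, containment) can be sketched roughly as follows. This is an illustrative approximation in Python, not the plugin's exact rules:

```python
def duplicate_reason(concept_name, other_name):
    """Return why two note names (without the .md extension) are flagged as
    potential duplicates, or None if they are not."""
    def normalize(name):
        name = name.lower().replace("-", " ").replace("_", " ")
        name = "".join(ch for ch in name if ch.isalnum() or ch == " ")
        return " ".join(name.split())
    if concept_name == other_name:
        return "exact match"
    a, b = normalize(concept_name), normalize(other_name)
    if a == b:
        return "match after normalization"
    if a == b + "s" or b == a + "s":
        return "singular/plural variant"
    if a and b and (a in b or b in a):
        return "one name contains the other"
    return None
```

Because containment matches are deliberately loose (e.g. "Graph" vs. "Graph Theory"), the review modal exists so you can confirm before anything is deleted.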

Supported LLM Providers

| Provider | Type | API Key Required | Notes |
| --- | --- | --- | --- |
| DeepSeek | Cloud | Yes | Native DeepSeek endpoint with reasoning-model handling |
| Qwen | Cloud | Yes | DashScope compatible-mode preset for Qwen / QwQ models |
| Qwen Code | Cloud | Yes | DashScope coding-focused preset for Qwen coder models |
| Doubao | Cloud | Yes | Volcengine Ark preset; usually set the model field to your endpoint ID |
| Moonshot | Cloud | Yes | Official Kimi / Moonshot endpoint |
| Xiaomi MiMo | Cloud | Yes | Xiaomi MiMo OpenAI-compatible endpoint for chat, coding, and multimodal models |
| GLM | Cloud | Yes | Official Zhipu BigModel OpenAI-compatible endpoint |
| Z AI | Cloud | Yes | International GLM/Zhipu OpenAI-compatible endpoint; complements GLM |
| MiniMax | Cloud | Yes | Official MiniMax chat-completions endpoint |
| Huawei Cloud MaaS | Cloud | Yes | Huawei ModelArts MaaS OpenAI-compatible endpoint for hosted models |
| Baidu Qianfan | Cloud | Yes | Official Qianfan OpenAI-compatible endpoint for ERNIE models |
| SiliconFlow | Cloud | Yes | Official SiliconFlow OpenAI-compatible endpoint for hosted OSS models |
| OpenAI | Cloud | Yes | Supports GPT and o-series models |
| Anthropic | Cloud | Yes | Supports Claude models |
| Google | Cloud | Yes | Supports Gemini models |
| Mistral | Cloud | Yes | Supports Mistral and Codestral families |
| Azure OpenAI | Cloud | Yes | Requires Endpoint, API Key, deployment name, and API Version |
| OpenRouter | Gateway | Yes | Access many providers through OpenRouter model IDs |
| xAI | Cloud | Yes | Native Grok endpoint |
| Groq | Cloud | Yes | Fast OpenAI-compatible inference for hosted OSS models |
| Together | Cloud | Yes | OpenAI-compatible endpoint for hosted OSS models |
| Fireworks | Cloud | Yes | OpenAI-compatible inference endpoint |
| Requesty | Gateway | Yes | Multi-provider router behind one API key |
| OpenAI Compatible | Gateway | Optional | Generic preset for LiteLLM, vLLM, Perplexity, Vercel AI Gateway, etc. |
| LMStudio | Local | Optional (may be empty) | Runs models locally via LM Studio server |
| Ollama | Local | No | Runs models locally via Ollama server |

Notes:

  • For local providers (LMStudio, Ollama), ensure the respective server application is running and accessible at the configured Base URL.
  • For OpenRouter and Requesty, use the provider-prefixed/full model identifier shown by the gateway (for example google/gemini-flash-1.5 or anthropic/claude-3-7-sonnet-latest).
  • Doubao usually expects an Ark endpoint/deployment ID in the model field rather than a raw model family name. The settings screen warns when the placeholder value is still present and blocks connection tests until you replace it with a real endpoint ID.
  • Z AI targets the international api.z.ai line, while GLM keeps the mainland China BigModel endpoint. Choose the preset that matches your account region.
  • China-focused presets use chat-first connection checks, so the test validates the actual configured model/deployment rather than only API-key reachability.
  • OpenAI Compatible is intended for custom gateways and proxies. Set the Base URL, API key policy, and model ID according to your provider's documentation.

Network Usage & Data Handling

Notemd runs locally inside Obsidian, but some features send outbound requests.

LLM Provider Calls (Configurable)

  • Trigger: file processing, generation, translation, research summarization, Mermaid summarization, and connection/diagnostic actions.
  • Endpoint: your configured provider base URL(s) in Notemd settings.
  • Data sent: prompt text and task content required for processing.
  • Data handling note: API keys are configured locally in plugin settings and attached to requests sent from your device.

Web Research Calls (Optional)

  • Trigger: when web research is enabled and a search provider is selected.
  • Endpoint: Tavily API or DuckDuckGo endpoints.
  • Data sent: your research query and required request metadata.

Developer Diagnostics & Debug Logs (Optional)

  • Trigger: API debug mode and developer diagnostic actions.
  • Storage: diagnostic and error logs are written to your vault root (for example Notemd_Provider_Diagnostic_*.txt and Notemd_Error_Log_*.txt).
  • Risk note: logs can contain request/response excerpts. Review logs before sharing them publicly.

Local Storage

  • Plugin configuration is stored in .obsidian/plugins/notemd/data.json.
  • Generated files, reports, and optional logs are stored in your vault according to your settings.
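A minimal sketch for inspecting the stored configuration from a shell opened at the vault root (the config path is the one noted above; python3 is used here only to pretty-print the JSON):

```shell
# Run from the vault root. data.json holds all Notemd settings,
# including API keys, so avoid sharing its contents.
CONFIG=".obsidian/plugins/notemd/data.json"
if [ -f "$CONFIG" ]; then
  python3 -m json.tool "$CONFIG"   # pretty-print current settings
else
  echo "No saved Notemd settings at $CONFIG"
fi
```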

Troubleshooting

Common Issues

  • Plugin Not Loading: Ensure manifest.json, main.js, styles.css are in the correct folder (<Vault>/.obsidian/plugins/notemd/) and restart Obsidian. Check the Developer Console (Ctrl+Shift+I or Cmd+Option+I) for errors on startup.
  • Processing Failures / API Errors:
    1. Check File Format: Ensure the file you are trying to process or check has a .md or .txt extension. Notemd currently only supports these text-based formats.
    2. Use the "Test LLM Connection" command/button to verify settings for the active provider.
    3. Double-check API Key, Base URL, Model Name, and API Version (for Azure). Ensure the API key is correct and has sufficient credits/permissions.
    4. Ensure your local LLM server (LMStudio, Ollama) is running and the Base URL is correct (e.g., http://localhost:1234/v1 for LMStudio).
    5. Check your internet connection for cloud providers.
    6. For single file processing errors: Review the Developer Console for detailed error messages. Copy them using the button in the error modal if needed.
    7. For batch processing errors: Check the error_processing_filename.log file in your vault root for detailed error messages for each failed file. The Developer Console or error modal might show a summary or general batch error.
    8. Automatic Error Logs: If a process fails, the plugin automatically saves a detailed log file named Notemd_Error_Log_[Timestamp].txt in your vault's root directory. This file contains the error message, stack trace, and session logs. If you encounter persistent issues, please check this file. Enabling "API Error Debugging Mode" in settings will populate this log with even more detailed API response data.
    9. Real Endpoint Long-Request Diagnostics (Developer):
      • In-plugin path (recommended first): use Settings -> Notemd -> Developer provider diagnostic (long request) to run a runtime probe on the active provider and generate Notemd_Provider_Diagnostic_*.txt in vault root.
      • CLI path (outside Obsidian runtime): for reproducible endpoint-level comparison between buffered and streaming behavior, use:
      npm run diagnose:llm -- \
        --transport openai-compatible \
        --provider-name OpenRouter \
        --base-url https://openrouter.ai/api/v1 \
        --api-key "$OPENROUTER_API_KEY" \
        --model anthropic/claude-3.7-sonnet \
        --prompt-file ./tmp/prompt.txt \
        --content-file ./tmp/content.txt \
        --mode compare \
        --timeout-ms 360000 \
        --output ./tmp/openrouter-diagnostic.txt
      
      The generated report contains per-attempt timing (First Byte, Duration), sanitized request metadata, response headers, raw/partial body fragments, parsed stream fragments, and transport-layer failure points.
  • LM Studio/Ollama Connection Issues:
    • Test Connection Fails: Ensure the local server (LM Studio or Ollama) is running and the correct model is loaded/available.
    • CORS Errors (Ollama on Windows): If you encounter CORS (Cross-Origin Resource Sharing) errors when using Ollama on Windows, you may need to set the OLLAMA_ORIGINS environment variable. You can do this by running set OLLAMA_ORIGINS=* in your command prompt before starting Ollama. This allows requests from any origin.
    • Enable CORS in LM Studio: For LM Studio, you can enable CORS directly in the server settings, which may be necessary if Obsidian is running in a browser or has strict origin policies.
  • Folder Creation Errors ("File name cannot contain..."):
    • This usually means the path provided in the settings (Processed File Folder Path or Concept Note Folder Path) is invalid for Obsidian.
    • Ensure you are using relative paths (e.g., Processed, Notes/Concepts) and not absolute paths (e.g., C:\Users\..., /Users/...).
    • Check for invalid characters: * " \ / < > : | ? # ^ [ ]. Note that \ is invalid even on Windows for Obsidian paths. Use / as the path separator.
  • Performance Problems: Processing large files or many files can take time. Reduce the "Chunk Word Count" setting for potentially faster (but more numerous) API calls. Try a different LLM provider or model.
  • Unexpected Linking: The quality of linking depends heavily on the LLM and the prompt. Experiment with different models or temperature settings.
  • Diagram Preview / Export Issues:
    1. Mermaid, JSON Canvas, and Vega-Lite artifacts support inline preview plus .svg / .png export. HTML artifacts support iframe fallback preview and raw-source save only.
    2. Preview/export theme follows the active Obsidian light/dark theme when the preview session's theme is set to system. If you switch the Obsidian theme while the preview modal is open, close and reopen the modal before exporting so the new theme is baked into the snapshot.
    3. Exported _preview.svg and _preview.png files are snapshots. Re-export after editing the source artifact or changing theme if the saved preview is stale.
    4. Invalid Mermaid artifacts now fail early with explicit validation errors before preview/export. Invalid JSON Canvas or Vega-Lite artifacts surface explicit preview errors. Save the raw artifact first if you need to inspect or repair the generated .md, .canvas, .json, or .html content manually.
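The local-server checks above can be scripted. The sketch below probes the default LM Studio and Ollama ports (adjust if you changed them); /api/tags is Ollama's standard model-list route and /v1/models is the OpenAI-compatible one — neither is Notemd-specific.

```shell
# Probe a local LLM endpoint; a refused connection usually means the
# server app is not running or listens on a different port.
probe() {
  if curl -s --connect-timeout 2 "$1" > /dev/null 2>&1; then
    echo "reachable"
  else
    echo "unreachable"
  fi
}

echo "Ollama    (http://localhost:11434/api/tags): $(probe http://localhost:11434/api/tags)"
echo "LM Studio (http://localhost:1234/v1/models): $(probe http://localhost:1234/v1/models)"
```

If a server is reachable here but Notemd's connection test still fails, the remaining suspects are CORS settings (see above) or a model that is not loaded.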

Contributing

Contributions are welcome! Please refer to the GitHub repository for guidelines: https://github.com/Jacobinwwey/obsidian-NotEMD

Maintainer Docs

License

MIT License - See LICENSE file for details.


Notemd v1.8.5 - Enhance your Obsidian knowledge graph with AI.

Development Chronicle

A quarterly chronicle rendered in the original repo-saga visual style. The quarter-by-quarter slicing is now produced directly by repo-saga itself.

Notemd Development Chronicle

Last refreshed for release tag 1.8.5 on 2026-05-08. Latest commit date: 2026-05-08.

Star History Chart

Similar Plugins

info
• Similar plugins are suggested based on the common tags between the plugins.
Smart Connections
3 years ago by Brian Petro
Chat with your notes & see links to related content with AI embeddings. Use local models or 100+ via APIs like Claude, Gemini, ChatGPT & Llama 3
Canvas Conversation
3 years ago by André Baltazar
A plugin for Obsidian that allows you to create a canvas conversation using ChatGPT.
Khoj
3 years ago by Debanjum Singh Solanky
Your AI second brain. Self-hostable. Get answers from the web or your docs. Build custom agents, schedule automations, do deep research. Turn any online or local LLM into your personal, autonomous AI (gpt, claude, gemini, llama, qwen, mistral). Get started - free.
Note aliases
3 years ago by Pulsovi
This plugin manages wikilink aliases and saves them to the aliases list of the linked note
ChatGPT MD
3 years ago by Bram Adams
A (nearly) seamless integration of ChatGPT into Obsidian.
Fantasy Content Generator
3 years ago by Gregory-Jagermeister
a fantasy name generator for Obsidian
GPT Assistant
3 years ago by M7mdisk
Ask GPT from your notes and get personalized answers based on your knowledge base.
AI Assistant
3 years ago by Quentin Grail
AI Assistant Plugin for Obsidian
Link Range
3 years ago by Ryan Mellmer
Add ranged link support to Obsidian
GPT-LiteInquirer
3 years ago by ittuann
💬 Experience OpenAI ChatGPT assistance directly within Obsidian, drafting content without interrupting your creative flow.
Links
3 years ago by MiiKey
manipulate & manage Obsidian links
Personal Assistant
3 years ago by edony
A plugin that harnesses AI agents and streamlining techniques to help you automatically manage Obsidian.
AI Mentor
3 years ago by clementpoiret
brAIn
3 years ago by lusob
Silicon AI
3 years ago by deepfates
Add some intelligence to your notes with Silicon AI for Obsidian
Arcana
3 years ago by A-F-V
Supercharge your Obsidian note-taking through AI-powered insights and suggestions
Vault Chat
3 years ago by Exo Ascension
A ChatGPT bot trained on your vault notes. Ask your AI questions about your own thoughts and ideas!
BMO Chatbot
3 years ago by Longy2k
Generate and brainstorm ideas while creating your notes using Large Language Models (LLMs) from Ollama, LM Studio, Anthropic, Google Gemini, Mistral AI, OpenAI, and more for Obsidian.
AI Notes Summary
3 years ago by R. Ian Bull (irbull)
An Obsidian plugin that uses ChatGPT to generate a summary of referenced notes
AI Research Assistant
3 years ago by Interweb Alchemy
Prompt Engineering Research Tool for AI APIs
ChatGPT Definition
3 years ago by julix14
Flashcard Generator
3 years ago by ChloeDia
Obsidian plug-in to automatically create a set of questions/answers from your notes!
AI Editor
3 years ago by Zekun Shen
Chat with Bard
3 years ago by Artel250
An Obsidian plugin that enables you to talk to Google Gemini directly
Canvas LLM Extender
3 years ago by Pasi Saarinen
Let the OpenAI LLM add nodes to your Obsidian canvas
ChatCBT
2 years ago by Claire Froelich
AI-powered journaling plugin for your Obsidian notes, inspired by cognitive behavioral therapy
Intelligence
2 years ago by John Mavrick
Gemini Assistant
2 years ago by eatgrass
Your AI assistant in obsidian
Smart Second Brain
2 years ago by Leo310, nicobrauchtgit
An Obsidian plugin to interact with your privacy focused AI-Assistant making your second brain even smarter!
WordWise
2 years ago by ckt1031
Writing companion for AI content generation.
AI Tagger
2 years ago by Luca Grippa
Simplify tagging in Obsidian. Instantly analyze and tag your document with one click for efficient note organization.
Quiz Generator
2 years ago by Edward Cui
Generate interactive flashcards from your notes using models from OpenAI (ChatGPT), Google (Gemini), Ollama (local LLMs), and more. Or manually create your own to use with the quiz UI.
Select & Complete
2 years ago by Mario De Luca
A really simple and easy to use AI completion for Obsidian
AI Zhipu
2 years ago by Tarslab
AI-zhipu is an Obsidian plugin that helps you utilize the Zhipu (智谱AI) API.
AI LLM
2 years ago by Sparky4567
Lets you use local LLMs in your Obsidian vaults to extend your stories or create entirely new texts based on your previous input
AI Summarize
2 years ago by Alp Sariyer
Easy to use AI Summary tool for your notes in Obsidian
Cloud Atlas
2 years ago by Cloud Atlas
Cloud Atlas Obsidian Client
Reverse Prompter
2 years ago by Ryan Halliday
Let AI generate prompts to keep you writing
Markpilot
2 years ago by Taichi Maeda
AI-powered inline completions and chat view for Obsidian
AI for Templater
2 years ago by TfTHacker
Extends Templater with AI Chat commands using the OpenAI Client Library
Strapi Exporter AI
2 years ago by Cinquin Andy
[prod] - 🚀 Strapi Exporter: Supercharge Your Obsidian-to-Strapi Workflow, export an obsidian notes directly to your Strapi API
CoCo AskAI
2 years ago by Yukee
CoCo-AskAI is an Obsidian plugin that enables AI-powered note assistance, enhancing the writing experience with customizable functions.
ai-writer
2 years ago by Donovan Ye
A plugin for Obsidian that uses AI to help you write better and faster.
AI Chat
2 years ago by arenasys
Github Copilot
2 years ago by Vasseur Pierre-Adrien
A bridge between Obsidian and Github Copilot
Ayanite
2 years ago by jemstelos
Rapid AI
2 years ago by Rapid AI
AI Assistant for selected text and generating content with Markdown. Shortcuts and quick action buttons provide instant AI assistance. It provides a high-availability API with unlimited ChatGPT request rates, so you can ensure smooth work under any workload.
Simple Prompt
2 years ago by David Zachariae
Simple Prompt Plugin is a plugin for Obsidian that allows you generate content in your notes using LLMs.
Explain Selection With AI
2 years ago by Ben Wurster
This is my first go at making an Obsidian plugin to elaborate on and describe selected bits of information and their context.
Tars
2 years ago by Tarslab
Obsidian tars plugin that supports text generation based on tag suggestions, using services like DeepSeek, Claude, OpenAI, OpenRouter, SiliconFlow, Gemini, Ollama, Kimi, Doubao, Qwen, Zhipu, QianFan & more.
Nextcloud Link Fixer
2 years ago by KaelLarkin
Caret
2 years ago by Jake Colling
Caret, an Obsidian Plugin
AI image analyzer
2 years ago by Swaggeroo
Analyze images with AI to get keywords of the image.
Smart Templates
2 years ago by 🌴 Brian Petro
Smart Templates is an AI powered templates for generating structured content in Obsidian. Works with Local Models, Anthropic Claude, Gemini, OpenAI and more.
AI LaTeX Generator
2 years ago by Aayush Shah
An Obsidian plugin that generates latex code from natural language inputs.
Mesh AI
2 years ago by Chasebank87
Add links to current note
6 years ago by MrJackphil
This plugin adds a command which allows to add a link to the current note at the bottom of selected notes
Wikilinks to MDLinks
5 years ago by Agatha Uy
An Obsidian md plugin which allows for the conversion of individually selected wikilinks to markdown links, and vice versa.
Page Heading From Links
5 years ago by Mark Beattie
Obsidian plugin to populate page headings
InfraNodus AI Graph View
2 years ago by Nodus Labs
Advanced graph view for Obsidian: text analysis, topic modeling, and AI with InfraNodus AI text analysis tool: https://infranodus.com
Open Interpreter
2 years ago by Mike Bird
The power of Open Interpreter in your Obsidian vault
Metadata Auto Classifier
2 years ago by Beomsu Koh
AI-powered Obsidian plugin that automatically classifies and generates metadata (tags, frontmatter) for your notes.
Smart Composer
2 years ago by Heesu Suh
AI chat assistant for Obsidian with contextual awareness, smart writing assistance, and one-click edits. Features vault-aware conversations, semantic search, and local model support.
NeuroVox
a year ago by Synaptic Labs
Obsidian plugin for transcription and generation
Gemini Scribe
a year ago by Allen Hutchison
An obsidian plugin to interact with Google Gemini
AI bot
a year ago by kuzzh
The AI Bot Plugin is a powerful tool designed to enhance your note-editing experience in Obsidian by leveraging the capabilities of AI. This plugin allows you to interact with an AI assistant directly within Obsidian, making it easier to generate, edit, and organize your notes with intelligent suggestions and automated tasks.
Insta TOC
a year ago by Nick C.
Generate, update, and maintain a table of contents for your notes while typing in real time.
LLM workspace
a year ago by Olivér Falvai
ExMemo Tools
a year ago by Yan.Xie
Use large models for smart document management and optimization, including relocating files, enhancing text, and generating metadata.
Smart Context
a year ago by 🌴 Brian
AI Revisionist
a year ago by Synaptic Labs
YouTube Video Summarizer
a year ago by mbramani
Generate AI-powered summaries of YouTube videos directly in Obsidian using Google's Gemini AI.
InlineAI
a year ago by FBarrca
MCP Tools
a year ago by Jack Steam
Add Obsidian integrations like semantic search and custom Templater prompts to Claude or any MCP client.
AI Providers
a year ago by Pavel Frankov
This plugin is a hub for setting AI providers (OpenAI-like, Ollama and more) in one place.
AI integration Hub
a year ago by Hishmat Salehi
A modular AI integration hub for Obsidian
Automatic Linker
a year ago by Kodai Nakamura
Title As Link Text
a year ago by Lex Toumbourou
An Obsidian plugin to set the Link Text using the document title
Vision Recall
a year ago by Travis Van Nimwegen
Transform screenshots into searchable Obsidian notes using AI vision and text analysis
AI Tagger Universe
a year ago by Hu Nie
An intelligent Obsidian plugin that leverages AI to automatically analyze note content and suggest relevant tags, supporting both local and cloud-based LLM services.
Memos AI Sync
a year ago by leoleelxh
obsidian-memos-sync-plugin: syncs Memos content into Obsidian, providing a seamless integration experience.
Blog AI Generator
a year ago by Gareth Ng
Obsidian Plugin: generate blog via AI based on the current note.
Student Repo
a year ago by Feirong.zfr
学生知识库助手 (Student Repository Helper) is an Obsidian plugin for students and their parents. It tackles the study-material management problems students face by systematically digitizing and organizing the important materials produced while learning — exam papers, notes, key documents, artwork and crafts — and uses an AI assistant to periodically analyze and summarize learning progress. Over time it helps you build a personal knowledge treasury that will accompany you for life, a record of your growth and accumulation of knowledge.
Research Quest
a year ago by Nathan Arthur
Smart ChatGPT
a year ago by 🌴 Brian
LLM Test Generator
a year ago by Aldo E George
AI Helper
a year ago by David Connolly
AI Note Tagger
a year ago by Jasper Mayone
Auto tagging obsidian notes w/ AI
HiNote
a year ago by Kai
Add comments to highlighted notes, use AI for thinking, and flashcards for memory.
Proofreader
a year ago by pseudometa (aka Chris Grieser)
AI-based proofreading and stylistic improvements for your writing. Changes are inserted as suggestions directly in the editor, similar to suggested changes in word processing apps.
Date Range Expander
a year ago by Mil
Obsidian plugin - Date Range Expander
Images to Notes
a year ago by Rodolfo Terriquez
Turn photos of your handwritten notes into markdown
Folder Filelist
a year ago by Bill Anderson
Obsidian plugin for simple folder listing
EasyLink
10 months ago by isitwho
Select text in your obsidian editor to find the most similar content from other notes and easily create links.
Private AI
9 months ago by GB
Effortlessly chat with your Obsidian notes using a privacy-first LLM. Private by design: your notes never leave the device and are processed locally.
Note Companion AI
8 months ago by Benjamin Ashgan Shafii
Note Companion: AI assistant for Obsidian that goes beyond just a chat. (prev File Organizer 2000)
Hydrate
5 months ago by hydrateagent
YOLO
4 months ago by Lapis0x0
Smart, snappy, and multilingual AI assistant for your vault.
AI Transcriber
4 months ago by Musashino Software
AI-powered speech-to-text transcription using OpenAI GPT-4o and Whisper APIs
Nova
3 months ago by Shawn Duggan
Nova - AI plugin for Obsidian that edits your documents directly through natural conversation. Stop copying from chat, start collaborating with AI.