Nodes Reference

This page documents all available node types in Node Banana. Each node serves a specific purpose in your workflow.

Overview

Node           | Purpose             | Inputs       | Outputs
Image Input    | Load images         | (none)       | Image
Prompt         | Text prompts        | (none)       | Text
Generate Image | AI image generation | Image, Text  | Image
Generate Video | AI video generation | Image, Text  | Video
LLM Generate   | AI text generation  | Text, Image  | Text
Annotation     | Draw on images      | Image        | Image
Split Grid     | Split into grid     | Image        | Reference
Output         | Display results     | Image, Video | (none)

Image Input

The Image Input node loads images into your workflow from your local filesystem.

Outputs

  • Image — The loaded image as base64 data

Features

  • Drag and drop images directly onto the node
  • Click to open file picker
  • Supports PNG, JPG, and WebP formats
  • Maximum file size: 10 MB

Usage

  1. Add an Image Input node to the canvas
  2. Click the node or drag an image file onto it
  3. The image appears in the node preview
  4. Connect the output to downstream nodes

You can paste images directly from your clipboard using Cmd/Ctrl + V when an Image Input node is selected.
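The loader itself is internal to Node Banana, but the constraints above are easy to picture. A minimal sketch using standard browser APIs (the function name is illustrative, not the app's actual code):

```ts
const ALLOWED_TYPES = ["image/png", "image/jpeg", "image/webp"];
const MAX_BYTES = 10 * 1024 * 1024; // the 10 MB limit noted above

// Validate a picked/dropped file and encode it as a base64 data URL,
// the format the Image Input node emits on its Image output.
function loadImageFile(file: File): Promise<string> {
  if (!ALLOWED_TYPES.includes(file.type)) {
    return Promise.reject(new Error(`Unsupported format: ${file.type}`));
  }
  if (file.size > MAX_BYTES) {
    return Promise.reject(new Error("File exceeds the 10 MB limit"));
  }
  return new Promise((resolve, reject) => {
    const reader = new FileReader();
    reader.onload = () => resolve(reader.result as string);
    reader.onerror = () => reject(reader.error);
    reader.readAsDataURL(file); // yields "data:image/png;base64,..."
  });
}
```

The same checks apply regardless of how the file arrives: drag-and-drop, the file picker, or a clipboard paste.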


Prompt

The Prompt node provides text input for your workflow. Use it to write prompts for image or text generation.

Outputs

  • Text — The prompt text string

Features

  • Inline text editing
  • Expand button for larger editor (modal)
  • Full-screen editing mode for complex prompts

Usage

  1. Add a Prompt node
  2. Type your prompt in the text area
  3. Click the expand icon for a larger editor
  4. Connect to Generate Image or LLM Generate nodes

Writing Effective Prompts

For image generation:

  • Be specific about subject, style, and composition
  • Include lighting and mood descriptions
  • Mention camera angle or perspective

For example:

    A professional headshot of a business executive,
    studio lighting, neutral gray background,
    sharp focus, high resolution

Generate Image

The Generate Image node creates images using AI models from multiple providers including Gemini, Replicate, and fal.ai.

Inputs

  • Image (optional, multiple) — Reference images for the generation (supports image-to-image)
  • Text — The prompt describing what to generate
  • Dynamic inputs — Additional inputs based on the selected model's schema

Outputs

  • Image — The generated image

Settings

Setting           | Description
Provider          | Choose from Gemini, Replicate, or fal.ai
Model             | Select from available models (use search dialog)
Custom Parameters | Model-specific parameters appear dynamically

Provider Configuration

Configure API keys for each provider in Project Settings → Providers tab:

  • Gemini — Google AI API key
  • Replicate — Replicate API token
  • fal.ai — fal.ai API key

Model Discovery

Click the model selector to open the Model Search dialog:

  • Browse models from all configured providers
  • Filter by provider using icon buttons
  • View recently used models for quick access
  • See capability badges (image/video) and model IDs
  • External links to model documentation

Dynamic Parameters

Each model exposes its own parameters:

  • Parameters update automatically when you change models
  • Input handles appear or disappear based on the model's schema (see the sketch after this list)
  • Parameter validation prevents invalid configurations
  • Custom UI for model-specific settings
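
The schema format itself isn't documented on this page, so the following is only a sketch of the general pattern: a model advertises typed parameters, and the node derives input handles from the connectable ones. All type and field names here are hypothetical.

```ts
// Hypothetical shape of one entry in a model's parameter schema.
interface ParamSpec {
  name: string;                       // e.g. "guidance_scale"
  type: "number" | "string" | "image";
  connectable: boolean;               // can this be fed by an edge?
  default?: number | string;
}

// Derive the node's input handles from the selected model's schema,
// so handles appear and disappear as the model changes.
function handlesFor(schema: ParamSpec[]): string[] {
  return schema.filter((p) => p.connectable).map((p) => p.name);
}

// Example: switching models swaps the handle set automatically.
const exampleSchema: ParamSpec[] = [
  { name: "prompt", type: "string", connectable: true },
  { name: "image", type: "image", connectable: true },
  { name: "guidance_scale", type: "number", connectable: false, default: 7.5 },
];
console.log(handlesFor(exampleSchema)); // ["prompt", "image"]
```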

Usage

  1. Add a Generate Image node
  2. Select a provider and model
  3. Connect a Prompt node to the text input
  4. Optionally connect Image Input nodes for image-to-image
  5. Configure model-specific parameters
  6. Run the workflow

Image-to-image generation works across all providers. Large images are automatically converted to temporary URLs for provider compatibility.
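
The conversion mechanism is internal, but the driving idea is a simple size check: inline base64 payloads work up to a point, and beyond it a hosted URL is safer. A rough sketch, with an assumed 1 MB inline threshold and a hypothetical uploadToTempStorage helper (neither is confirmed by this page):

```ts
// Hypothetical helper; Node Banana's real upload path may differ.
declare function uploadToTempStorage(dataUrl: string): Promise<string>;

// Approximate decoded size of a base64 data URL's payload, in bytes.
function base64ByteSize(dataUrl: string): number {
  const base64 = dataUrl.slice(dataUrl.indexOf(",") + 1);
  return Math.floor((base64.length * 3) / 4);
}

// Providers often cap inline payloads; past a threshold, swap the
// data URL for a short-lived hosted URL.
async function toProviderImage(dataUrl: string): Promise<string> {
  const INLINE_LIMIT = 1 * 1024 * 1024; // assumed threshold, not documented
  if (base64ByteSize(dataUrl) <= INLINE_LIMIT) return dataUrl;
  return uploadToTempStorage(dataUrl); // returns a temporary https URL
}
```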

Image Carousel

After generating, use the carousel to:

  • Browse previous generations (arrow buttons)
  • See generation history for this node
  • Select a previous result as the current output

Legacy Workflows

Workflows using the old NanoBananaNode automatically migrate to GenerateImageNode on load.


Generate Video

The Generate Video node creates videos using AI models from providers that support video generation.

Inputs

  • Image (optional, multiple) — Reference images or starting frames
  • Text — The prompt describing the video to generate
  • Dynamic inputs — Additional inputs based on the selected model's schema

Outputs

  • Video — The generated video

Settings

Setting           | Description
Provider          | Choose from providers with video capabilities
Model             | Select from available video models
Custom Parameters | Model-specific parameters (duration, fps, etc.)

Video Generation Features

  • Extended timeout — up to 10 minutes for longer video processing
  • Video playback — In-node video player with controls
  • Format detection — Automatic handling of various video formats
  • Generation queue — Manages video generation tasks

Usage

  1. Add a Generate Video node
  2. Select a provider and video-capable model
  3. Connect a Prompt node describing the video
  4. Optionally connect Image Input for reference frames
  5. Configure video parameters (duration, style, etc.)
  6. Run the workflow

⚠️ Video generation typically takes longer than image generation and may have higher costs. Check provider pricing before running.
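
Only the 10-minute figure comes from this page; the endpoint and payload below are hypothetical, sketched to show how a client can keep a request open that long using the standard AbortSignal.timeout API:

```ts
// Hypothetical endpoint and payload; the real API surface isn't documented here.
async function generateVideo(prompt: string): Promise<Blob> {
  const response = await fetch("/api/generate-video", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
    // Abort if the provider hasn't responded within 10 minutes,
    // matching the extended timeout described above.
    signal: AbortSignal.timeout(10 * 60 * 1000),
  });
  if (!response.ok) throw new Error(`Generation failed: ${response.status}`);
  return response.blob(); // the finished video file
}
```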

Video Carousel

After generating, use the carousel to:

  • Browse previous video generations
  • Play/pause videos directly in the node
  • Navigate through video generation history
  • Select a previous result as the current output

Output Display

Connect Generate Video to an Output node to:

  • Display videos in a larger preview area
  • Access download controls
  • View video metadata (duration, resolution)

LLM Generate

The LLM Generate node creates text using large language models. Use it for prompt enhancement, descriptions, or any text generation task.

Inputs

  • Text — Input prompt or context
  • Image (optional, multiple) — Images for multimodal generation

Outputs

  • Text — The generated text

Settings

Setting     | Description
Model       | Select from Gemini or OpenAI models
Temperature | Controls randomness (0-1)
Max Tokens  | Maximum output length

Available Models

Google Gemini:

  • gemini-2.5-flash (fast, capable)
  • gemini-3-flash-preview (latest flash)
  • gemini-3-pro-preview (most capable)

OpenAI:

  • gpt-4.1-mini (balanced)
  • gpt-4.1-nano (fast)

⚠️ OpenAI models require a separate OPENAI_API_KEY in your environment.
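
How the node forwards these settings is internal to Node Banana, but for orientation, here is how Temperature and Max Tokens map onto Google's @google/generative-ai SDK (a sketch of a direct SDK call, not the app's code):

```ts
import { GoogleGenerativeAI } from "@google/generative-ai";

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);

// Temperature and Max Tokens correspond to generationConfig fields.
const model = genAI.getGenerativeModel({
  model: "gemini-2.5-flash",
  generationConfig: {
    temperature: 0.7,      // 0 = most deterministic, 1 = most random
    maxOutputTokens: 1024, // caps output length
  },
});

const result = await model.generateContent("Describe a cat on a roof.");
console.log(result.response.text());
```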

Usage

  1. Add an LLM Generate node
  2. Connect a Prompt node with your instructions
  3. Optionally connect images for multimodal input
  4. Configure model and parameters
  5. Run to generate text

Example: Prompt Enhancement

Connect nodes like this:

[Prompt: "enhance this prompt for image generation: cat on roof"]
    → [LLM Generate]
    → [Generate Image]

The LLM can expand simple prompts into detailed generation instructions.


Annotation

The Annotation node opens a full-screen drawing editor where you can draw on images.

Inputs

  • Image — The image to annotate

Outputs

  • Image — The annotated image

Drawing Tools

Tool      | Description
Rectangle | Draw rectangular shapes
Circle    | Draw circular shapes
Arrow     | Draw arrows for highlighting
Freehand  | Free drawing with mouse/pen
Text      | Add text labels

Features

  • 8 color presets
  • 3 stroke width options
  • Undo/redo support
  • Shape selection and transformation
  • Save or cancel changes

Usage

  1. Connect an image source to the Annotation input
  2. Click the Edit button on the node
  3. Use drawing tools to annotate
  4. Click Save to apply changes
  5. The annotated image flows to connected nodes

Use annotations to mask areas, add reference marks, or highlight regions for AI generation. The AI will see and respond to your annotations.
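
The editor's internals aren't shown here, but its core effect, rasterizing shapes onto the image so downstream AI nodes see them as pixels, can be sketched with the standard Canvas API:

```ts
// Flatten one rectangle annotation onto an image and return a data URL.
// Illustrative only; the real editor supports more tools and state.
async function annotateWithRect(
  imageUrl: string,
  rect: { x: number; y: number; w: number; h: number },
): Promise<string> {
  const img = new Image();
  img.src = imageUrl;
  await img.decode(); // wait until pixels are ready to draw

  const canvas = document.createElement("canvas");
  canvas.width = img.naturalWidth;
  canvas.height = img.naturalHeight;

  const ctx = canvas.getContext("2d")!;
  ctx.drawImage(img, 0, 0);    // source image first
  ctx.strokeStyle = "#ff0000"; // one of the color presets
  ctx.lineWidth = 4;           // one of the stroke widths
  ctx.strokeRect(rect.x, rect.y, rect.w, rect.h);

  return canvas.toDataURL("image/png"); // flows on as the Image output
}
```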


Split Grid

The Split Grid node divides an image into a grid of smaller images. This is useful for contact sheets or batch processing.

Inputs

  • Image — The image to split

Outputs

  • Reference (multiple) — Visual references to grid cells

Grid Options

Option | Grid Size
2×2    | 4 cells
2×3    | 6 cells
2×4    | 8 cells
3×3    | 9 cells
2×5    | 10 cells

Usage

  1. Connect an image (like a contact sheet) to Split Grid
  2. Select your grid configuration
  3. The node generates output references for each cell
  4. Connect references to organize downstream processing

How It Works

Split Grid is primarily for visual organization. It:

  • Divides the source image into equal cells (see the sketch after this list)
  • Creates reference outputs for each cell
  • Helps you visually track which part of an image flows where
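
A sketch of that cell arithmetic (helper names are illustrative, not Node Banana's own):

```ts
interface Cell { x: number; y: number; width: number; height: number }

// Compute the pixel bounds of each cell for a cols × rows grid.
// Integer division may drop a few remainder pixels at the right/bottom edges.
function gridCells(imgW: number, imgH: number, cols: number, rows: number): Cell[] {
  const cellW = Math.floor(imgW / cols);
  const cellH = Math.floor(imgH / rows);
  const cells: Cell[] = [];
  for (let row = 0; row < rows; row++) {
    for (let col = 0; col < cols; col++) {
      cells.push({ x: col * cellW, y: row * cellH, width: cellW, height: cellH });
    }
  }
  return cells;
}

// A 2×2 split of a 1024×1024 image yields four 512×512 cells.
console.log(gridCells(1024, 1024, 2, 2).length); // 4
```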

Output

The Output node displays the final result of your workflow. Use it as the endpoint for generated images and videos.

Inputs

  • Image — Images to display
  • Video — Videos to display

Features

  • Large preview area
  • Click to open lightbox viewer
  • Download button for saving results
  • Shows image dimensions or video metadata
  • Video playback controls with format detection
  • Carousel for browsing media history

Usage

  1. Add an Output node at the end of your workflow
  2. Connect the final image or video source
  3. Run the workflow
  4. View and download results from the Output node

While you can view images and videos in any node, Output nodes provide a cleaner display area and make it clear where your workflow ends.


Groups

Groups aren't nodes, but they're an important organizational feature.

Creating Groups

  1. Select multiple nodes
  2. Right-click → "Create Group"
  3. Name your group

Group Features

  • Color coding — Groups have colored backgrounds
  • Collective movement — Drag to move all contained nodes
  • Lock/unlock — Locked groups skip execution

Use Cases

  • Organize related nodes visually
  • Disable workflow sections without deleting
  • Create reusable workflow "modules"

Common Node Features

All nodes share these capabilities:

Title Editing

Click the title to rename any node. Custom names help organize complex workflows.

Comments

Add comments to nodes for documentation. Hover to see the full comment.

Resizing

Drag the bottom-right corner to resize nodes.

Execution Controls

  • Play button — Run from this node
  • Regenerate — Re-run with current inputs

Error States

When a node encounters an error:

  • Red border appears
  • Error message displays
  • Check the browser console for details