Connectors

High-performance Rust-native source and sink connectors for Apache Iggy

Connectors are one of the most powerful features of the LaserData platform. Built on the Apache Iggy connectors runtime — a high-performance, modular framework for statically typed, dynamically loaded Rust plugins — they let you integrate your deployments with external systems without writing any code.

Connectors are natively compiled in Rust for high performance and a minimal memory footprint — no JVM, no garbage-collection pauses.

How It Works

Every connector is a Rust library implementing either the Source or Sink trait from the Apache Iggy Connectors SDK. The connectors runtime loads plugins dynamically at startup and manages their full lifecycle — configuration, execution, monitoring, and shutdown.

┌───────────────────────────────────────────────────────┐
│                    Deployment Node                    │
│                                                       │
│  ┌──────────────┐    ┌─────────────────────────────┐  │
│  │              │    │     Connectors Runtime      │  │
│  │  Iggy Server │◄──►│                             │  │
│  │              │    │  ┌──────────┐ ┌──────────┐  │  │
│  │  - Streams   │    │  │  Source  │ │   Sink   │  │  │
│  │  - Topics    │    │  │  Plugin  │ │  Plugin  │  │  │
│  │  - Messages  │    │  └────┬─────┘ └────┬─────┘  │  │
│  └──────────────┘    └───────┼────────────┼────────┘  │
│                              │            │           │
└──────────────────────────────┼────────────┼───────────┘
                               │            │
                       External System  External System
                       (e.g. API, DB)   (e.g. Postgres)

The runtime runs as a separate process managed by the Warden agent on the same node. It connects to the Iggy server locally over TCP with TLS, so connector traffic stays on the node and your data never transits an external network.
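
To make the plugin model concrete, here is a minimal sketch of the shape a sink plugin can take. The trait and type names below are illustrative stand-ins, not the actual Apache Iggy Connectors SDK API — consult the SDK documentation for the real trait definitions and message types.

use std::error::Error;

// Hypothetical stand-in for the SDK's sink trait: the runtime hands the
// plugin batches of messages consumed from an Iggy stream.
pub trait Sink: Send + Sync {
    // Called once at startup with the instance configuration.
    fn open(&mut self, config: &str) -> Result<(), Box<dyn Error>>;

    // Called for every batch of messages consumed from the stream.
    fn consume(&mut self, messages: &[Vec<u8>]) -> Result<(), Box<dyn Error>>;

    // Called on shutdown so the plugin can flush and release resources.
    fn close(&mut self) -> Result<(), Box<dyn Error>>;
}

// Toy sink that only counts messages; a real plugin would write them to an
// external system such as PostgreSQL.
pub struct CountingSink {
    processed: u64,
}

impl Sink for CountingSink {
    fn open(&mut self, _config: &str) -> Result<(), Box<dyn Error>> {
        self.processed = 0;
        Ok(())
    }

    fn consume(&mut self, messages: &[Vec<u8>]) -> Result<(), Box<dyn Error>> {
        self.processed += messages.len() as u64;
        Ok(())
    }

    fn close(&mut self) -> Result<(), Box<dyn Error>> {
        println!("processed {} messages", self.processed);
        Ok(())
    }
}

fn main() -> Result<(), Box<dyn Error>> {
    // Drive the toy sink directly; in production the connectors runtime
    // performs these calls over the plugin's lifecycle.
    let mut sink = CountingSink { processed: 0 };
    sink.open("{}")?;
    sink.consume(&[b"order-1".to_vec(), b"order-2".to_vec()])?;
    sink.close()
}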

Connector Types

Type     Direction                 What It Does
Source   External system → Iggy    Produces messages into Iggy streams from an external system
Sink     Iggy → External system    Consumes messages from Iggy streams and pushes them to an external destination

You can run multiple connector instances simultaneously — for example, a source connector ingesting from one system while multiple sink connectors push to different destinations.

Activating a Connector

From the Console

  1. Navigate to your deployment and open the Connectors tab
  2. Browse the available connector catalog — filter by source or sink
  3. Click Activate on the connector you want
  4. Optionally set a custom instance name and key
  5. The platform provisions the connector on all nodes with the Connectors runtime

The connector starts in Pending status and transitions to Active once the nodes have processed the activation task.

Instance Naming

Each connector instance gets a unique key within the deployment. If you don't provide a custom key, one is auto-generated:

  • First instance: {connector}-{type} (e.g. postgres-sink)
  • Subsequent instances: {connector}-{type}-{n} (e.g. postgres-sink-2)

You can activate multiple instances of the same connector — for example, two PostgreSQL sinks writing to different databases.
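
The naming convention is simple enough to express in a few lines. The function below is only an illustration of the documented rule, not platform code:

// Illustrates the documented key convention: the first instance of a
// connector/type pair gets "{connector}-{type}", later ones a numeric suffix.
fn instance_key(connector: &str, connector_type: &str, existing: usize) -> String {
    if existing == 0 {
        format!("{connector}-{connector_type}")
    } else {
        format!("{connector}-{connector_type}-{}", existing + 1)
    }
}

fn main() {
    assert_eq!(instance_key("postgres", "sink", 0), "postgres-sink");
    assert_eq!(instance_key("postgres", "sink", 1), "postgres-sink-2");
}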

Connector Lifecycle

Status     Meaning
Pending    Instance created, waiting for nodes to process activation
Active     Running and processing messages
Inactive   Disabled but configuration preserved
Failed     Encountered errors — check logs for details

Monitoring

The Connectors runtime reports detailed per-instance metrics:

Metric               Description
messages_produced    Total messages produced (source connectors)
messages_consumed    Total messages consumed (sink connectors)
messages_processed   Total messages successfully processed
errors               Error count
status               Runtime status (Starting, Running, Stopping, Stopped, Error)

Runtime-level metrics are also available: CPU usage, memory usage, total sources/sinks running. View these in the Console's Metrics tab or via the Monitoring API.
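
If you consume the Monitoring API programmatically, a struct mirroring the per-instance metrics might look like the sketch below, assuming the serde (with derive) and serde_json crates. The field names follow the table above; the exact JSON shape returned by the API is an assumption, so verify it against a real response.

use serde::Deserialize;

// Assumed payload shape for per-instance connector metrics; field names
// follow the metrics table above.
#[derive(Debug, Deserialize)]
struct ConnectorMetrics {
    messages_produced: u64,
    messages_consumed: u64,
    messages_processed: u64,
    errors: u64,
    status: String, // "Starting" | "Running" | "Stopping" | "Stopped" | "Error"
}

fn main() -> serde_json::Result<()> {
    // Example payload; in practice this would come from the Monitoring API.
    let payload = r#"{
        "messages_produced": 0,
        "messages_consumed": 1500,
        "messages_processed": 1498,
        "errors": 2,
        "status": "Running"
    }"#;
    let metrics: ConnectorMetrics = serde_json::from_str(payload)?;
    println!("{metrics:?}");
    Ok(())
}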

Deleting a Connector Instance

From the Console's Connectors tab, click Delete on the instance. This removes the instance from all nodes and deletes its configuration.

Required permission: deployment:connector:manage (activate, configure, delete) or deployment:connector:read (view only)


API Reference

The main API handles connector catalog browsing and activation. The deployment API manages running instances.

List Available Connectors

curl "https://api.laserdata.cloud/tenants/{tenant_id}/divisions/{division_id}/environments/{environment_id}/deployments/{deployment_id}/connectors?type=sink&page=1&results=10" \
  -H "ld-api-key: YOUR_API_KEY"

Returns the connector catalog with availability and permission status for your deployment.
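
The same request from Rust, sketched with reqwest's blocking client (the blocking and json features plus serde_json are assumed, and the path segments and API key are placeholders, as in the curl example):

use reqwest::blocking::Client;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Placeholders, matching the curl example above.
    let tenant_id = "TENANT_ID";
    let division_id = "DIVISION_ID";
    let environment_id = "ENVIRONMENT_ID";
    let deployment_id = "DEPLOYMENT_ID";
    let url = format!(
        "https://api.laserdata.cloud/tenants/{tenant_id}/divisions/{division_id}/environments/{environment_id}/deployments/{deployment_id}/connectors"
    );

    // Same filters as the curl query string: sink connectors, first page of 10.
    let catalog: serde_json::Value = Client::new()
        .get(url)
        .header("ld-api-key", "YOUR_API_KEY")
        .query(&[("type", "sink"), ("page", "1"), ("results", "10")])
        .send()?
        .error_for_status()?
        .json()?;

    println!("{catalog:#}");
    Ok(())
}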

Activate a Connector

curl -X POST https://api.laserdata.cloud/tenants/{tenant_id}/divisions/{division_id}/environments/{environment_id}/deployments/{deployment_id}/connectors/activate \
  -H "ld-api-key: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "connector_key": "postgres",
    "connector_type": "sink",
    "instance_name": "Orders to Postgres",
    "instance_key": "orders-pg-sink"
  }'

Returns 202 Accepted. The connector starts in Pending status and transitions to Active once nodes process the task.

List Connector Instances

curl {supervisor_url}/deployments/{deployment_id}/connectors/instances \
  -H "ld-api-key: YOUR_API_KEY"
[
  {
    "id": 1,
    "deployment_id": 42,
    "connector_type": "sink",
    "connector_key": "postgres",
    "name": "Orders to Postgres",
    "key": "orders-pg-sink",
    "status": "active"
  }
]
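
Since activation returns 202 Accepted and the instance starts out Pending, you can poll this endpoint until it reports active. A hedged sketch using reqwest's blocking client and serde_json; the supervisor URL, deployment ID, instance key, and API key are placeholders:

use std::{thread, time::Duration};
use reqwest::blocking::Client;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Placeholders, as in the curl examples above.
    let supervisor_url = "https://SUPERVISOR_URL";
    let deployment_id = 42;
    let instance_key = "orders-pg-sink";

    let client = Client::new();
    let url = format!("{supervisor_url}/deployments/{deployment_id}/connectors/instances");

    // Poll every 5 seconds until the instance reports "active"; give up after ~60s.
    for _ in 0..12 {
        let instances: serde_json::Value = client
            .get(&url)
            .header("ld-api-key", "YOUR_API_KEY")
            .send()?
            .error_for_status()?
            .json()?;

        // Look up our instance by key and read its status field.
        let status = instances
            .as_array()
            .and_then(|list| list.iter().find(|inst| inst["key"] == instance_key))
            .map(|inst| inst["status"].as_str().unwrap_or("unknown").to_string());

        match status.as_deref() {
            Some("active") => {
                println!("{instance_key} is active");
                return Ok(());
            }
            Some(other) => println!("{instance_key} is {other}, waiting..."),
            None => println!("{instance_key} not reported yet, waiting..."),
        }
        thread::sleep(Duration::from_secs(5));
    }
    Err("connector did not become active in time".into())
}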

Delete a Connector Instance

curl -X DELETE {supervisor_url}/deployments/{deployment_id}/connectors/instances/{instance_id} \
  -H "ld-api-key: YOUR_API_KEY"
