
Connectors Overview

Connectors are the building blocks of Meddle workflows. Each connector is a specialized component that reads data from sources, writes data to destinations, or processes and transforms data in transit.

Connectors are wired together in a visual workflow editor to create powerful data integration pipelines without writing code.

Meddle provides 18 different connector types organized into three main categories:

Industrial Connectors

Connect to industrial automation systems and PLCs:

OPC UA

Industry-standard protocol for industrial automation. Supports multiple authentication methods and security modes.

Modbus

Serial and TCP communication with PLCs and industrial devices. Supports all register types and data formats.

Siemens S7

Direct communication with Siemens S7-300, S7-400, S7-1200, and S7-1500 PLCs.

IoT and Database Connectors

Integrate with IoT protocols and databases:

MQTT

Publish/subscribe messaging for IoT devices with QoS support.

HTTP/REST

Connect to REST APIs and web services with customizable headers and methods.

InfluxDB

Time-series database optimized for industrial and IoT data.

MongoDB

NoSQL document database for flexible data storage.

SQL

Support for MySQL, PostgreSQL, and SQL Server databases.

Processing Connectors

Transform, filter, and process data:

Filter

Whitelist or blacklist specific data keys.
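
To make the whitelist/blacklist idea concrete, here is a minimal Python sketch of the same operation over the standard key-value payload. The function name and signature are illustrative, not the connector's actual implementation:

```python
def filter_payload(payload, whitelist=None, blacklist=None):
    """Keep only whitelisted keys, or drop blacklisted keys (illustrative)."""
    if whitelist is not None:
        return {k: v for k, v in payload.items() if k in whitelist}
    if blacklist is not None:
        return {k: v for k, v in payload.items() if k not in blacklist}
    return dict(payload)

# Keep only the temperature reading from a larger payload
payload = {"temperature": 25.5, "pressure": 101.3, "status": "active"}
compact = filter_payload(payload, whitelist={"temperature"})
```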

Conveyor

Pass data through unchanged for many-to-many routing.

Merge

Combine data from multiple sources with timing or key-based strategies.
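
A key-based merge can be sketched in a few lines of Python. This is an illustration only; the assumption here is that later sources overwrite earlier ones on key collisions, which may differ from the connector's actual strategy:

```python
def merge_payloads(*payloads):
    """Key-based merge of several payloads; later sources win on collisions (assumed)."""
    merged = {}
    for p in payloads:
        merged.update(p)
    return merged

# Combine readings from two sources into one payload
combined = merge_payloads({"temperature": 25.5}, {"pressure": 101.3})
```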

Reshape

Rename fields or enrich data with static values.
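
Both Reshape operations can be sketched against the standard payload format. The function below is a hypothetical illustration of the behavior, not the connector's API:

```python
def reshape(payload, rename=None, enrich=None):
    """Rename fields via a mapping, then add static enrichment values (illustrative)."""
    out = {}
    for k, v in payload.items():
        out[(rename or {}).get(k, k)] = v  # fall back to the original key
    out.update(enrich or {})               # static values overwrite on collision
    return out

# Standardize a field name and tag the payload with its site
shaped = reshape({"temp": 21.0}, rename={"temp": "temperature"}, enrich={"site": "plant-1"})
```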

Trigger

Conditional logic using MXL expressions.

Cron

Schedule data release at specific times.

Auth

JWT authentication and validation.

Anomaly Detection

ML-based anomaly detection using Isolation Forest.

Alert

Send notifications based on conditions.

Chart

Real-time data visualization.

All connectors exchange data using a standard key-value payload format:

{
  "temperature": 25.5,
  "pressure": 101.3,
  "humidity": 60,
  "status": "active",
  "timestamp": 1234567890
}

This standardized format allows any connector to communicate with any other connector, regardless of the underlying protocol or system.

Each connector is configured using JSON with three main sections:

  1. Type: The connector type (e.g., OpcuaReader, MqttV3Writer)
  2. Config: Connector-specific configuration (endpoints, credentials, etc.)
  3. Variables: (Optional) For industrial connectors, defines what data to read/write

Example:

{
  "type": "OpcuaReader",
  "config": {
    "endpoint": "opc.tcp://localhost:4840",
    "pollingRate": 1000
  },
  "variables": [
    {
      "key": "temperature",
      "nodeId": "ns=1;s=Temperature"
    }
  ]
}

Connectors are organized into worksheets (workflows) where:

  1. Reader connectors collect data from sources
  2. Processing connectors transform and filter data
  3. Writer connectors send data to destinations

Data flows through connections between connectors, with each connector processing data in parallel for maximum performance.
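
The reader → processing → writer flow can be sketched as plain Python. This is a sequential toy model; in Meddle the connectors run in parallel, and all names below are illustrative:

```python
def reader():
    """Stand-in for a reader connector producing key-value payloads."""
    yield {"temperature": 25.5, "pressure": 101.3}

def process(payload):
    """Stand-in for a processing connector (here: keep only temperature)."""
    return {k: v for k, v in payload.items() if k == "temperature"}

def writer(payload, sink):
    """Stand-in for a writer connector sending payloads to a destination."""
    sink.append(payload)

sink = []
for payload in reader():
    writer(process(payload), sink)
```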

Many reader connectors support a pollingRate parameter (in milliseconds):

{
  "pollingRate": 1000  // Poll every 1 second
}
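
Conceptually, a polling reader calls its source every pollingRate milliseconds, so the read rate is roughly 1000 / pollingRate reads per second. A minimal sketch of such a loop (illustrative, not Meddle's scheduler):

```python
import time

def poll(read_fn, polling_rate_ms, iterations):
    """Call read_fn every polling_rate_ms milliseconds for a fixed number of iterations."""
    results = []
    for _ in range(iterations):
        results.append(read_fn())
        time.sleep(polling_rate_ms / 1000.0)  # wait before the next read
    return results

# With pollingRate = 1000, this would collect one payload per second
readings = poll(lambda: {"temperature": 25.5}, polling_rate_ms=1, iterations=3)
```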

Connectors that require authentication typically support multiple methods:

{
  "username": "user",
  "password": "pass"
}

Or for token-based auth:

{
  "authToken": "your-token-here"
}

All connectors include built-in error handling and will:

  • Log errors with specific error codes
  • Continue operation when possible
  • Provide detailed error messages for troubleshooting

Choose appropriate pollingRate values to avoid overwhelming source systems:

  • Faster polling = more data but higher CPU/network usage
  • Typical values: 100ms (fast), 1000ms (normal), 5000ms (slow)

Some connectors accumulate data in memory:

  • Merge: Combines data from multiple sources
  • Cron: Holds data until scheduled release

Use the maxRetained parameter to limit memory usage in these connectors.

For performance, Meddle parallelizes execution:

  • Worksheets execute connectors in parallel
  • Multiple worksheets run simultaneously
  • Work is horizontally scaled across CPU cores
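
A bounded buffer captures the effect of maxRetained. The sketch below assumes maxRetained drops the oldest payloads once the limit is reached; the class and method names are hypothetical:

```python
from collections import deque

class RetainedBuffer:
    """Accumulate payloads, keeping at most max_retained (oldest dropped first; assumed)."""
    def __init__(self, max_retained):
        self.items = deque(maxlen=max_retained)  # deque discards from the left when full

    def add(self, payload):
        self.items.append(payload)

    def release(self):
        """Emit everything retained so far and clear the buffer."""
        out = list(self.items)
        self.items.clear()
        return out

# With max_retained=2, only the two most recent payloads survive
buf = RetainedBuffer(2)
for payload in ({"n": 1}, {"n": 2}, {"n": 3}):
    buf.add(payload)
```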

Begin with basic reader → writer workflows before adding processing connectors.

Place Filter connectors early in the workflow to reduce payload size and improve performance.

Use Reshape connectors to standardize field names across different sources.

Use Trigger connectors to detect error conditions and route to Alert connectors.

Use Cron connectors for batch processing and scheduled data releases.

Use Auth connectors to validate JWT tokens in data flows that require authentication.

Add Anomaly Detection connectors to identify unusual patterns in your data.

Explore the connector documentation by category.