FLUX Users Guide

OpenAI Integration

Introduction

FLUX provides integration with the OpenAI API, enabling you to create elements that leverage artificial intelligence capabilities within your workflows.

Overview

The OpenAI integration is implemented through two key components:

You can view the profile configuration here: OpenAI API Profile Example

Available OpenAI Elements

FLUX includes several pre-built elements that demonstrate OpenAI integration capabilities:

Python Helper

TTP Helper

JSON Query

Using OpenAI Elements in Your Code

To integrate OpenAI elements into your code, follow these steps:

  1. Locate the Element – Search for the Element that contains your desired OpenAI model
  2. Invoke the Function – Call make_question_to_openai_element() with your question as the second parameter
  3. Receive the Response – The function returns the OpenAI-generated answer

How It Works

The make_question_to_openai_element() function executes an OpenAI API call using the element's configured model and the question you supply, then returns the AI-generated response for use in your application.

Example Implementation
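
A minimal sketch of the pattern described above, assuming a task management script context; the get_element() lookup helper is hypothetical, and the question is passed as the second parameter as noted in the steps:

# Locate the OpenAI helper element (get_element is a hypothetical lookup helper)
element = get_element(short_name='openai-helper-python')

# Ask the question and keep the answer for later steps
answer = make_question_to_openai_element(element, 'Summarize the last device audit')
context['openai_answer'] = answer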

Additional Resources

For detailed information on creating custom models with OpenAI integration capabilities, refer to the Models documentation page.

Models

Overview

Models in FLUX enable you to perform data transformations using different processing classes. Each model class requires specific formatting in the Model data field to define how data should be parsed and transformed.

You can validate your model's behavior by entering sample input in the Test data field and running a test to preview the output results.


Model Classes

CLI

Execute command-line instructions on a device.

Model data format: A command string that will be executed.

Structured Text

Process text containing variables in a structured format.

Model data format: Structured text with embedded variables.

Python

Execute Python code to transform data.

Model data format: Python code that uses a data variable, which will be populated with the content from the Test data input.
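
For instance, a sketch of Model data for a Python model, assuming the output dictionary documented later in the Model Python Script Context section (the re module is among the preloaded modules, so no import is needed):

# `data` holds the content of the Test data input
match = re.search(r'hostname (\S+)', data)
output['hostname'] = match.group(1) if match else None
output['line_count'] = len(data.splitlines())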

JSON & XML

Parse and transform JSON or XML data structures.

Model data format: JSON structure with variables.

Block

Parse block-formatted files using custom or predefined rules.

Model data format: YAML parser configuration with multiple parsing rules.

Using predefined rules: Leave the Model data field empty to automatically detect the block type and parse it using FLUX's built-in rules.


Testing Models

Basic Model Testing

  1. Navigate to the Test model tab
  2. Fill the Model data input with the parsing configuration
  3. Fill the Test data input with sample data
  4. Run test in the CLI to perform the transformation

Example:
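
A minimal illustration, assuming the Python model class:

Model data:

output['line_count'] = len(data.splitlines())

Test data:

hostname router-01
interface GigabitEthernet1

Running test in the CLI then returns the transformation result, e.g. {'line_count': 2}.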

Testing Against Element Context

You can test a model using the context of a specific element by entering the following in the Test data input:

id={x}

Replace {x} with the ID of the element whose context you want to use for testing.


Creating Model Templates with OpenAI

FLUX can generate model templates automatically using OpenAI based on your test data.

Prerequisites

  1. Create an empty model with your desired class
  2. Create an OpenAI helper element with the naming pattern: openai-helper-{class} (where {class} matches your model class)

Steps

  1. Navigate to the Test model tab
  2. Fill the Test data input with your sample data
  3. Leave the Model data input empty
  4. Run template.create in the CLI terminal

The system will call the openai-helper-{class} element, query the OpenAI API, print the generated template, and automatically evaluate it against your test data.


Dynamic Parameters

Models support dynamic variable substitution using element context data. Variables can be loaded in steps prior to model execution, allowing the model to replace them during transformation.

Example configuration:

Example usage:
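
The exact placeholder syntax depends on the model class; the following sketch is purely illustrative. It assumes a context key net.0.hostname loaded in an earlier step and a {{ ... }} placeholder convention:

# Management script step executed before the model runs:
context['net'] = {'0': {'hostname': 'router-01'}}

Model data (illustrative placeholder syntax):

hostname {{ net.0.hostname }}

During transformation the model would replace the placeholder with 'router-01'.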

PublicElementConfig

Overview

The PublicElementConfig class is used within FLUX task processing scripts to manage and validate element configurations. It provides methods for retrieving, comparing, and auditing configuration values across multiple formats.

Configuration Object

Attributes

The config object provides the following attributes:

  • klass (string): Format class of the current configuration. Supported values: txt, py, xml, json, yaml
  • section (string): Section identifier of the current configuration
  • version (string): Version identifier of the current configuration
  • required_approvement_percentage (integer): Minimum percentage of audit tests that must pass for the configuration to be considered approved
  • audit_value (integer): Final calculated result of the audit

Methods

get_last_config()

Returns the most recent configuration for the current element.

Parameters: None

Returns: Configuration object


to_json()

Converts the current configuration to JSON format.

Parameters: None

Returns: JSON string representation of the configuration


get(key)

Retrieves a specific value from the configuration by key.

Parameters:

  • key: Name of the configuration key to retrieve

Returns: Value associated with the specified key

Example:

domain_value = config.get('domain_value')

compare_numeric_value(key, compare_operator, value, required_for_audit)

Compares a numeric configuration value against a specified value using a comparison operator.

Parameters:

  • key: Configuration key holding the numeric value to compare
  • compare_operator: Comparison operator to apply (e.g. '>=', '>', '==')
  • value: Numeric value to compare against
  • required_for_audit: When True, the result counts toward the audit calculation

Returns: Boolean result of the comparison

Example:

settings['audit.valid_domain'] = config.compare_numeric_value('size', '>=', 4, required_for_audit=True)

compare_string_value(key, compare_operator, value, required_for_audit)

Compares a string configuration value against a specified value using a comparison operator.

Parameters:

  • key: Configuration key holding the string value to compare
  • compare_operator: Comparison operator to apply (e.g. 'startswith', 'contains')
  • value: String value to compare against
  • required_for_audit: When True, the result counts toward the audit calculation

Returns: Boolean result of the comparison

Example:

settings['audit.valid_domain'] = config.compare_string_value('domain', 'startswith', 'https', required_for_audit=True)

check_key_matches(key_pattern, required_for_audit)

Checks whether any configuration key matches the specified pattern.

Parameters:

  • key_pattern: Pattern to match against configuration keys (wildcards supported, e.g. "interface.*")
  • required_for_audit: When True, the result counts toward the audit calculation

Returns: Boolean indicating if a match was found

Example:

config.check_key_matches("interface.GigabitEthernet1.*", required_for_audit=True)

check_key_occurrences(key_pattern, compare_occurrences_operator, value, required_for_audit)

Counts the number of configuration keys matching a pattern and compares the count against a specified value.

Parameters:

  • key_pattern: Pattern to match against configuration keys
  • compare_occurrences_operator: Operator used to compare the match count (e.g. '>=')
  • value: Count to compare against
  • required_for_audit: When True, the result counts toward the audit calculation

Returns: Boolean result of the comparison

Example:

config.check_key_occurrences("interface.*", ">=", 90, required_for_audit=True)

complex_compare(key_pattern, compare_str, required_for_audit)

Executes a complex comparison using OpenAI natural language processing to evaluate configuration values.

Parameters:

  • key_pattern: Pattern selecting the configuration value(s) to evaluate
  • compare_str: Natural-language condition evaluated through OpenAI
  • required_for_audit: When True, the result counts toward the audit calculation

Returns: Boolean result of the comparison

Example:

config.complex_compare('interface.x', 'starts with Virtual', required_for_audit=True)

Complete Example

The following example demonstrates a comprehensive configuration audit script using multiple validation methods:

# Set configuration metadata
config.version = '1.7'
config.section = 'core'
config.required_approvement_percentage = 85

# Validate interface count
settings['audit.interfaces'] = config.check_key_occurrences("interface.*", ">=", 90, required_for_audit=True)

# Validate domain configuration
settings['audit.valid_domain'] = config.compare_string_value('domain', 'startswith', 'zequenze', required_for_audit=True)

# Complex validations using natural language
settings['audit.virtual_interface'] = config.complex_compare('interface.x', 'starts with Virtual', required_for_audit=True)
settings['audit.custom_hop'] = config.complex_compare('route.x', 'contains a nexthop to the ip 172.16.254.1', required_for_audit=True)

# Validate specific routing configuration
settings['audit.route'] = config.check_key_matches('route.0.0.0.0/0', required_for_audit=True)

# Validate hostname format
settings['audit.valid_hostname'] = config.compare_string_value('hostname', 'contains', '01', required_for_audit=True)

# Check for specific interface features
settings['audit.has_logging_interface'] = config.check_key_matches("interface.*.logging", required_for_audit=True)
settings['audit.has_vrf_interface'] = config.check_key_matches("interface.*.vrf", required_for_audit=True)
settings['audit.has_gigabit_interface'] = config.check_key_matches("interface.GigabitEthernet1.*", required_for_audit=True)
settings['audit.has_gigabit_two_interface'] = config.check_key_matches("interface.GigabitEthernet2.*", required_for_audit=True)

# Store final audit result
settings['audit.result'] = config.audit_value

Element Configurations

Overview

This guide explains how to use FLUX to create task flows that extract device configurations and perform automated audits. You'll learn how to configure audit parameters, set up automated extraction tasks, and review audit results.

Prerequisites

Before beginning, ensure you have:

Step 1: Configure Element Parameters

Create a Parameter Group

First, add a parameter group to your element's profile to store configuration audit settings.

Define Audit Parameters

Create individual parameters for each configuration aspect you want to evaluate during the audit process.

Step 2: Create the Audit Task

Your audit task must perform three key functions:

  1. Extract the device configuration
  2. Parse the configuration into a manageable format
  3. Apply audit conditions using the parameters defined in Step 1

Extract Device Configuration

Create an automation model that executes a command on the target device to retrieve its configuration output.

Example: The following automation model extracts device configuration data.
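
For a CLI-class model, the Model data is simply the command string to run on the device; a typical, purely illustrative command for a Cisco-style device:

show running-config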

Parse the Configuration

Create a processing model to parse the extracted configuration data into a structured format.

Example: The following processing model is configured to parse block-formatted text (the parsing rules themselves are defined in YAML).

For more information about processing models, see the Models documentation.

Configure Audit Conditions

In your task processing script, define the configuration data and audit conditions using the element's parameters.

For detailed implementation guidance, refer to the PublicElementConfig documentation.

Step 3: Review Audit Results

Once the audit task has executed, you can view the results in the Element settings interface.

Running Audits on Historical Configurations

You can configure tasks to run audits against previously stored device configurations instead of extracting a new configuration each time.

Implementation

To audit a historical configuration:

  1. In the task's management script, set the automation_custom_output variable
  2. Populate this variable with the stored configuration data using the get_last_config() method from the PublicElementConfig class

This approach simulates the output of the automation model using previously extracted data.

Example: Configuration for auditing historical data.
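
Concretely, the task's management script needs only the single assignment shown in the Task section of this guide:

context['automation_custom_output'] = config.get_last_config()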

Notifications

Introduction

FLUX allows you to configure flows that send notifications containing data from previous tasks. By creating specialized notification tasks, you can automatically deliver email notifications to any recipient with customized content.

Overview

Notification functionality in FLUX enables you to:

Configuration

Setting Up a Notification Task

  1. Create a task with the "Notification operations" class within your flow
  2. Configure the notification parameters in the management script:
    • Recipient: Specify the email address(es) to receive the notification
    • Subject: Define the email subject line
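
These parameters map to context keys set in the management script (see Special Variables by Task Class on the Task page), for example:

context['email_destination'] = 'devops@zequenze.com'
context['email_subject'] = 'Flow notification'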

Creating the Email Template

Create a dedicated model for your email that defines the body content. Within this model, you can reference:

Accessing Task Variables

Variable Syntax

To reference variables from other tasks in your flow, use the following syntax:

tasks.{id}.{method}.{result|status}
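
For example, assuming Django-template-style rendering (as used for the email subject) and an illustrative task ID of 42, the parsed result of that task could be embedded as:

{{ tasks.42.processing_model_out.result }}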

Available Methods

You can access data from these task methods:

Context Variables

Context variables are accessible at the moment the notification task executes, allowing you to include real-time flow information in your notifications.

Example

A live flow example demonstrating notification configuration is available in the FLUX portal.

Scripts with visualization feature

Sequential flow designer basics

The Scripts configuration allows you to execute Python code with the option of using a visual representation, as shown in the screenshot below:

[Screenshot: sequential flow designer]

Interface Components

The sequential flow designer consists of four main components:

Toolbox

Contains all available steps that can be added to your flow. Simply drag and drop steps from the toolbox into the steps diagram to build your sequence.

Control Bar

Provides viewport and editing controls:

Steps Diagram

Displays the flow of steps to be executed in sequence. Click on any step to view and modify its settings in the Step Editor panel.

Step Editor

Shows the configuration settings for the currently selected step. This panel updates dynamically based on which step you have selected in the diagram.

Scripts with visualization feature

Available steps for management scripts in task elements

Set Value in Objects

[Screenshot: Set Value in Objects step]

This step allows you to assign different types of values to three available object types within your management scripts.

[Screenshot: step configuration form]

Available Objects

[Screenshot: object type selection]

The three available objects where you can set values are Results, Context, and Settings.

Value Types

[Screenshot: value type selection]

You can set values using different data types and sources depending on your requirements.

Examples

Example 1: Setting a Value in the Settings Object

[Screenshot: Example 1 configuration]

The above configuration generates code of the following form (the key and value shown here are illustrative):
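
settings['my_key'] = 'my_value'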

Example 2: Adding an Email Destination to the Context Object

Adding an email destination to the context object:

[Screenshot: Example 2 configuration]

The above configuration generates the following code:

context['email_destination'] = 'user@mail.com'

Set Last Configuration in Object

[Screenshot: Set Last Configuration step]

This step allows you to retrieve the last configuration and assign it to one of the available objects. This is useful when you need to access or reference previously saved configuration data within your management scripts.

Examples

Example: Setting Last Configuration in the Context Object

Setting the last configuration in the context object inside the automation_custom_output key:

[Screenshot: example configuration]

The above configuration generates the following code:

context['automation_custom_output'] = config.get_last_config()

Scripts with visualization feature

Available steps table

Overview

FLUX provides three types of scripts, each with specific available steps for workflow automation. This page details all available steps and their properties for each script type.


Management Script Steps

The management_script supports the following steps:

Set Value in Objects

Stores different types of values in one of three available objects using a typed key.

Properties:

  • dictionary name (str): Object Name. Options: Results, Context, Settings
  • key name (str): Name of key
  • value type: Type of value. Options: str, number, none, expression
  • value

Set SSH Key Pair in Context

Retrieves an SSH key pair and stores it in one of the defined objects using a custom key.

Properties:

  • privKeyKey
  • pubKeyKey

Set Last Configuration

Retrieves the element's last configuration and stores it in one of the available objects using a custom key.

Properties:

  • dictionary name (str): Options: Results, Context, Settings
  • key name (str)

Processing Script Steps

The processing_script supports the following steps:

Set Value in Objects

Stores different types of values in one of three available objects using a typed key.

Properties:

  • dictionary name (str): Object Name. Options: Results, Context, Settings
  • key name (str): Name of key
  • value type: Type of value. Options: str, number, none, expression
  • value

Set Last Configuration

Retrieves the element's last configuration and stores it in one of the available objects using a custom key.

Properties:

  • dictionary name
  • key name

Set Variable on Element Config

Sets an attribute in the element configuration object.

Properties:

  • key name
  • value type
  • value

If Condition Simple for Numbers

Checks a simple condition with a numeric variable. Examples: if a == b, if a >= b, if a < 12

Properties:

  • variable to evaluate
  • operator
  • value

If Condition Simple for Strings

Checks a simple condition with a string variable. Examples: if a.startswith('device_'), if a in b, if s.endswith(':')

Properties:

  • variable to evaluate
  • operator
  • value

If Condition Advanced

Evaluates Python conditions. Examples: if context['host'] == "0.0.0.0", if not data.get('apt_upgradable.table')

Properties:

  • condition

Pause

Executes a time.sleep(time) operation.

Properties:

  • time

Pause with Log

Executes a sleep operation with logging.

Properties:

  • time

Update Element Credentials

Reads SSH credentials from context and updates them in the element.

Properties:

  • privKeyKey
  • pubKeyKey

Wait for Connect

Waits up to a specified timeout, retrying until the connection to the element succeeds.

Properties:

  • timeout
  • retry
  • sleep

Set Variable Value

Note: To be documented.

Properties:

  • variable_name
  • value_type
  • value

Execution Script Steps

The Flow execution_script includes all steps from the Processing Script, plus the following additional steps:

Filter Flow Elements

Applies a function to filter the given elements, with support for in-place modification or returning a new list.

Properties:

  • elements (list): List of elements. Options: elements, ...
  • filter_function (function): Function to filter elements
  • inplace (bool): Replace or return. Options: True, False

Task Execution

Runs a specific task on given elements. If the flow containing the task is serial, the elements parameter is not required.

Properties:

  • short_name (str): Task short name
  • elements (list): List of elements to run the task. Options: elements, ...
  • retry_attempts (int): Retry attempts
  • retry_delay (int): Retry delay

Flow Execution

Runs a specific flow on given elements.

Properties:

  • short_name (str): Flow short name
  • elements (list): List of elements to run the flow. Options: elements, ...
  • retry_attempts (int): Retry attempts
  • retry_delay (int): Retry delay

Task

Task Class

The task class determines the action that will be executed when the task runs. FLUX supports the following task classes:

  1. CLI Operations: Establishes a connection with a device and sends model commands via console
  2. REST/HTTP: Sends model data (typically JSON format) via HTTP
  3. SOAP/HTTP: Sends model data via HTTP
  4. Notifications: Sends a custom notification with the model as the message body

Management Script

The Management Script adds extra data to the execution context that can be referenced within models.

For detailed code documentation, see Visual Workflow Designer Scripts.

Special Variables by Task Class

Notifications Task

Use these context variables to configure notification parameters:

context['email_destination'] = 'devops@zequenze.com'
context['email_subject'] = 'Flow notification'

Simulating Automation Model Output

You can set the model output by retrieving the last configuration:

context['automation_custom_output'] = config.get_last_config()

Automation Model

The Automation Model is used to build the messages or commands that will be sent to the target element.

Processing Model

The Processing Model is used to process and parse the element's response to the executed message or command.

Processing Script

The Processing Script adds extra data to the results of command or message execution.

For detailed code documentation, see Visual Workflow Designer Scripts.

Special Variables by Task Class

REST/HTTP Task

In REST/HTTP tasks, you can access the response data using the data variable:

if data.get('status_code') == 200:
    settings['nms_downtime.status'] = 'false'

All Task Classes

All task classes can save specific configuration data for the target element. See PublicElementConfig for object reference documentation.

Example configuration and audit checks:

config.version = '1.7'
config.section = 'core'
config.required_approvement_percentage = 85

settings['audit.interfaces'] = config.check_key_occurrences("interface.*", ">=", 90, required_for_audit=True)
settings['audit.valid_domain'] = config.compare_string_value('domain', 'startswith', 'zequenze', required_for_audit=True)
settings['audit.virtual_interface'] = config.complex_compare('interface.x', 'starts with Virtual', required_for_audit=True)
settings['audit.custom_hop'] = config.complex_compare('route.x', 'contains a nexthop to the ip 172.16.254.1', required_for_audit=True)
settings['audit.route'] = config.check_key_matches('route.0.0.0.0/0', required_for_audit=True)
settings['audit.valid_hostname'] = config.compare_string_value('hostname', 'contains', '01', required_for_audit=True)
settings['audit.has_logging_interface'] = config.check_key_matches("interface.*.logging", required_for_audit=True)
settings['audit.has_vrf_interface'] = config.check_key_matches("interface.*.vrf", required_for_audit=True)
settings['audit.has_gigabit_interface'] = config.check_key_matches("interface.GigabitEthernet1.*", required_for_audit=True)
settings['audit.has_gigabit_two_interface'] = config.check_key_matches("interface.GigabitEthernet2.*", required_for_audit=True)
settings['audit.result'] = config.audit_value

Flow

Overview

Flows enable you to coordinate the execution of multiple steps across several elements. A step can be:

Schedule

You can configure a specific schedule for your flow. When scheduled, the flow executes automatically on the specified days at the specified time.

Execution Mode

Serial

In Serial mode, the flow executes all steps for a single element before moving to the next element. This ensures that each element completes its entire workflow sequentially.

Parallel

In Parallel mode, the flow executes one step across all elements before proceeding to the next step. This allows for simultaneous execution across multiple elements.

Processing Script

The Processing Script lets you sort and filter the elements that the flow execution will run against.

Example

sort_function = lambda x: 1 if x['context']['patroni_leader'] == 'master' else 0
order_flow_elements(elements, sort_function, reverse=False)

With reverse=False the sort is ascending on the computed key, so non-master elements (key 0) are ordered before "master" elements (key 1).

Available Variables and Functions

  1. elements: Contains all elements that match the profile, groups, and filters
  2. order_flow_elements: Sorts elements based on specified criteria (you can use element context)
  3. filter_flow_elements: Filters elements based on specified criteria (you can use element context)
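
As an illustration of filter_flow_elements, the following sketch keeps only active elements; the predicate is an assumption based on the per-element flow context structure (each entry exposes the serialized element under context['element']), and any context-based condition works:

# Keep only elements whose serialized record is active (illustrative predicate)
filter_function = lambda x: x['context']['element']['is_active']
filter_flow_elements(elements, filter_function, inplace=True)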

Execution Script

The Execution Script defines which steps will be executed when the flow runs. See Visual Script Editing to learn how to use the visual view.

Special Variables

Custom Steps

Execution Examples

You can combine multiple flows of different execution modes (Serial and Parallel) to achieve more complex logic. When defining a flow execution script, you can pass specific elements to each child flow execution.

Example Scenario

Given 2 elements (E1, E2), 2 flows (F1 & F2), and 5 tasks (A, B, C, D, F):

[Diagram: example elements, flows, and tasks]

Parent Serial - Child Serial

What happens when F1 is Serial and F2 is Serial?

[Diagram: resulting execution order]

Parent Serial - Child Parallel

What happens when F1 is Serial and F2 is Parallel?

[Diagram: resulting execution order]

Parent Parallel - Child Serial

What happens when F1 is Parallel and F2 is Serial?

[Diagram: resulting execution order]

Parent Parallel - Child Parallel

What happens when F1 is Parallel and F2 is Parallel?

[Diagram: resulting execution order]

Dashboard: Main

Screenshot - Dashboard: Main

Overview

The Zequenze CONTROL Portal Dashboard provides a comprehensive monitoring and management interface for automation systems. This main dashboard view displays real-time automation logs, performance metrics, and system statistics over a 24-hour period, enabling administrators to monitor system health and automation execution status.

Key Features

Real-Time Automation Monitoring

System Metrics Dashboard

Four key metric cards displaying:

UI Elements

Navigation Sidebar

Left-side navigation panel includes:

Top Navigation Bar

Chart Controls

User Interactions

Dashboard Monitoring

Navigation Options

Navigation

Access Path

Users can navigate to:

Data Displayed

Automation Logs Chart

System Statistics

Actions Available

Dashboard Management

System Navigation

Notes/Tips

Best Practices

Performance Insights

System Health Indicators

Elements

Screenshot - Elements

Overview

The Inventory Elements page in the Zequenze CONTROL Portal provides a comprehensive view of all network elements and devices managed within the system. This centralized dashboard allows administrators to monitor device status, manage configurations, and perform various operations on network infrastructure components.

Key Features

UI Elements

Navigation Bar

Tab Navigation

Search and Filter Controls

Map View

Data Displayed

The main table shows detailed information about network elements:

Column Structure

Element Types and Status

The table displays various network elements including:

Parent-Child Relationships

Elements display hierarchical relationships:

Status Indicators

User Interactions

Search and Discovery

Element Management

Status Monitoring

Navigation

Access Path

Home → Inventory → Elements

Actions Available

Individual Element Actions

Bulk Operations

Map Controls

Filtering Options

Filter Panel Options

The right-side filter panel provides advanced filtering capabilities:

Notes/Tips

Automation models

Screenshot - Automation models

Overview

The Automation Models page in the Zequenze Control Portal provides a comprehensive interface for managing automation models within the FLUX admin portal. This page displays a detailed table of automation models with their configurations, versions, and operational status, allowing administrators to monitor and manage their automation infrastructure.

Key Features

UI Elements

Navigation Bar

Tab Navigation

Action Buttons

Filter Panel

The collapsible filter panel on the right side (currently collapsed) includes:

Main Data Table

The table contains the following columns:

Displayed Models

The table shows several automation models:

  1. Send notifications docs zequenze (send-docs-zequenze) - CLI, version 1.0, Out direction
  2. state and details Parser OUTPUT DOCS (state-details_pars_docs) - JSON, version 1.0, Out direction
  3. Exec script generate docs (exec-generate_docs) - CLI, version 1.0, Out direction
  4. Refresh ecr secrets on gke (refresh-ecr-secrets-gke) - CLI, version 1.0, Out direction
  5. GKE Cluster Maintenance icinga2 downtime remove (gke-icinga2-maintenance-remove) - JSON, version 1.0, Out direction
  6. GKE Cluster Maintenance icinga2 downtime schedule (gke-icinga2-maintenance-downtime) - JSON, version 1.0, Out direction
  7. GKE Cluster icinga2 downtime remove (gke-icinga2-downtime-disable) - JSON, version 1.0, Out direction
  8. GKE Cluster icinga2 downtime schedule (gke-icinga2-downtime) - JSON, version 1.0, Out direction
  9. COPY Gunicorn.py model (gunicorn-model-copy) - version 1.0, In direction
  10. Exec migrate script (exec-migrate-script) - CLI, version 1.0, Out direction
  11. AUTH Settings.py model (settings-model-auth) - version 1.0, In direction

User Interactions

Search and Filter

Table Interactions

Navigation

Access Path

Users can reach this page through:

  1. Main dashboard navigation
  2. Automation section in the left sidebar
  3. Models subsection under Automation

The left sidebar shows related automation features:

Data Displayed

Model Types

The page displays various automation model types:

Status Indicators

Actions Available

Primary Actions

Filtering Actions

Model-Specific Actions

Notes/Tips

Best Practices

Important Information

Automation tasks

Screenshot - Automation tasks

Overview

The Automation Tasks page in the Zequenze CONTROL Portal provides a comprehensive interface for managing and monitoring automated tasks across your network infrastructure. This page displays a detailed list of all automation tasks including documentation management, GKE cluster maintenance, Kubernetes deployments, and Docker image builds.

Key Features

UI Elements

Header Section

Main Task Table

The table displays the following columns:

Filter Panel (Right Side)

The filter panel shows the following filtering options:

User Interactions

Task Management

Task Configuration

Navigation

Current Location

From the left navigation menu, users can access:

Data Displayed

Task Categories Visible

  1. Documentation Tasks:

    • Task Send notifications docs (send_notif_task_docs) - build.ops group
    • Docs IA (docs-ia-generate)
  2. GKE Cluster Maintenance:

    • Refresh ecr secrets on gke (refresh-ecr-secrets-gke)
    • GKE Cluster Maintenance icinga2 downtime disable (icinga2-maintenance-downtime-dis) - nms.ops group
    • GKE Cluster Maintenance Window icinga2 downtime schedule (gke-icinga2-downtime-maintenance) - nms.ops group
    • GKE Cluster icinga2 downtime disable (icinga2-downtime-disable) - nms.ops group
    • GKE Cluster icinga2 downtime schedule (gke-icinga2-downtime) - nms.ops group
  3. Template Management:

    • TEMPLATE gunicorn.py.j2 (gunicorn-py-j2) - auth.dev group
  4. Deployment & Migration:

    • Execute migration script (exe-migrate-app)
    • AUTH Settings.py.j2 (settings-transfer-auth) - auth.dev group
    • Execution of post-deployment tasks (exec-post_deploy)
  5. Kubernetes Operations:

    • Deploy Kubernetes app statefulset templates (statefulset-app_deploy)
    • Run the application build and upload the Docker image (run-build_and_push)
  6. Additional Tasks:

    • Clone/checkout application repository (git-clone-checkout) - auth.dev and build.ops groups
    • list of docker images (docker-images-list)
    • Task Send notifications certificates (send_notif_task_Cert) - build.ops group
    • Costos GCP (app_gcp_costs)

Status Indicators

Actions Available

Primary Actions

Secondary Actions

Notes/Tips

Automation flows

Screenshot - Automation flows

Overview

The Automation Flows page in the Zequenze FLUX admin portal provides a comprehensive management interface for viewing, monitoring, and controlling automated workflows. This page displays all automation flows configured in the system with their current status, scheduling information, and execution details.

Key Features

UI Elements

Header Section

Action Bar

Filter Panel

The right-side filter panel displays the following filter options:

Main Data Table

The table displays the following columns:

User Interactions

Navigation

Search and Filter

Flow Management Actions

Data Displayed

Flow Information

Status Indicators

Execution Details

Actions Available

Primary Actions

Secondary Actions

Filter Panel Options

Status Filters

Organizational Filters

Additional Filtering

Notes/Tips

Automation schedules

Screenshot - Automation schedules

Overview

The Automation Schedules page in the Zequenze Control Portal provides a centralized view for managing all automation schedules within the system. This interface allows administrators to view, filter, and manage both interval-based and specific schedules that control various automated processes across the platform.

Key Features

Schedule Management Interface

Advanced Filtering System

UI Elements

Main Table Structure

Control Panel

Filter Panel Interface

Header Section

User Interactions

Viewing Schedules

  1. Text Search: Use the search bar to find schedules by name
  2. Filter Panel Access: Click the "FILTER" button to expand the filter options panel
  3. Class Filter: Select between Interval and Specific schedule types (available when filter panel is expanded)
  4. Public Status Filter: Choose between public/private status options (available when filter panel is expanded)
  5. Organization Filter: Filter by Root, Zequenze, or other organizations with sub-organization support (available when filter panel is expanded)

Schedule Management

Navigation

Accessing This Page

Data Displayed

Schedule Types

  1. Interval Schedules (Time-based recurring):

    • 1 hour, 1 minute, 2 hours, 24 hours
    • 30 minutes, 30 seconds, 5 minutes
    • 7 days, 8 hours
    • All marked as public and owned by Root organization
  2. Specific Schedules (Task-specific):

    • DOCS scheduler
    • GKE Maintenance Window
    • GKE Maintenance Window Remove
    • GKE refresh ecr token
    • gcp_costs_scheduler
    • reboot_scheduler
    • All marked as private and owned by Zequenze organization

Visual Indicators

Actions Available

Primary Actions

Secondary Actions

Notes/Tips

Best Practices

Important Considerations

Filter Usage Tips

Services

Screenshot - Services

Overview

The AI Services management page in the Zequenze Control Portal provides administrators with a centralized interface to manage and monitor artificial intelligence services within the FLUX admin portal. This page displays all configured AI services, their current status, and provides tools for service management and organization.

Key Features

UI Elements

Main Navigation

Service Management Table

The main table displays the following columns:

Filter Panel (Right Side)

The filter panel is currently open and displays the following filter options:

User Interactions

Primary Actions

Service Management

Navigation

Access Path

  1. Navigate to Home → Allfred → Services
  2. Or use direct URL: https://flux-dev.zequenze.com/admin/alfred/assistantservice/

Data Displayed

The page currently shows 18 results with various services including:

Each service displays:

Actions Available

Service Management Actions

Filter Management Actions

Administrative Functions

Notes/Tips

Best Practices

Important Considerations

User Account Information

Metric logs

Screenshot - Metric logs

Overview

The Metric Logs page in the Zequenze FLUX Admin Portal provides a comprehensive view of all metric data collected from various devices and sensors in your fleet. This centralized logging system displays real-time and historical metric information, allowing administrators to monitor system performance, track data collection activities, and troubleshoot connectivity issues across their IoT infrastructure.

Key Features

UI Elements

Main Data Table

The central table displays metric logs with the following columns:

Filter Panel (Right Sidebar)

The filter panel is positioned on the right side with the following elements:

Top Navigation Bar

User Interactions

Viewing Metrics

Filtering Data

Data Export

Navigation

Accessing the Page

Data Displayed

Metric Types

The system tracks package management operations across devices:

Device Integration

The system tracks metrics from various infrastructure elements including:

Data Values

The system tracks metrics with:

Temporal Data

Actions Available

Primary Actions

Administrative Actions

Notes/Tips

Performance Optimization

Data Interpretation

Best Practices

Automation Contexts

Overview

This document explains how automation contexts are constructed, composed, and made available across every execution layer of the FLUX automation application: flows, tasks, scripts, models, and services (email-msg).

Contexts provide a structured way to pass data and configuration throughout the automation pipeline, ensuring that each component has access to the appropriate information needed for execution.

Think of context as a "backpack of information" that moves through the execution. It starts with base device data and gets enriched as the process moves forward. The main idea: each step can reuse what previous steps already discovered.

Base Element Context

Function: element_context_generate(element)

The element context serves as the foundation of every execution. It is generated once per element at the beginning of each task or flow execution step and is passed throughout the entire pipeline.

Source Location: apps/inventory/utils.py → element_context_generate()

Context Structure

The base element context contains the following components:

{
    # ── Group Variables ─────────────────────────────────────────────────────────
    # One key per GroupVariable configured in any Group assigned to the element.
    # Variables with dots in their name are expanded into nested dictionaries.
    # Example: "if.description" → context["if"]["description"] = value
    "<variable_name>": "<value>",
    "<nested.variable>": "<value>",   # expanded: context["nested"]["variable"]

    # ── Element Settings (ElementSetting) ───────────────────────────────────────
    # One key per parameter configured in the element's Profile.
    # Key format: [group.variable_root.][instance.]parameter.variable_name
    # Example: "net.0.hostname" → context["net"]["0"]["hostname"]
    "<parameter_variable_name>": "<value>",

    # ── Element Serialization ───────────────────────────────────────────────────
    "element": {                       # Serialized Element (all database fields)
        "id": 1,
        "name": "router-01",
        "uuid": "...",
        "status": True,                # operational status
        "status_change": "...",        # last status change datetime
        "prev_status_change": "...",
        "is_active": True,
        "debug": False,
        "sync": False,
        "reconf": False,
        "internal": False,
        "location": {                  # nested — all Location fields
            "id": 1, 
            "name": "DataCenter A", 
            ...
        },
        "latitude": None,
        "longitude": None,
        "profile": 3,                  # Foreign Key ID
        "organization": 1,             # Foreign Key ID
        "is_public": False,
        "group": [2, 5],               # list of Foreign Key IDs
        "certificate": None,
        "key": None,
        "password": None,
        "elevated_priv_password": None,
        "management_address": "192.168.1.1",
        "management_gateway": None,
        "transport_settings": None,
        "software_version": "17.3.4",
        "hardware_version": None,
        "serial_number": "FTX1234ABCD",
        "serial_number_alt": None,
        "ip_address": None,            # Foreign Key ID
        "ip_network": None,
        "dfgw_address": None,
        "dns_servers": None,
        "mac_address": None,
        "created": "...",
        "created_by": 1,
        "last_change": "...",
        "first_execution": "...",
        "last_execution": "...",
        "description": "Core router",
        "description2": None,
        "description3": None,
        "description4": None,
        "notes": None,
        "active_alert": None,
        "alert": None,
    },

    # ── Timestamp ───────────────────────────────────────────────────────────────
    "epoch": 1710000000,               # UTC timestamp (integer) at context generation time
}

Flow Context Addition: context['flow']

When a task runs inside a flow (via run_task_wrapper), the following key is automatically added before the TaskContext is built:

context['flow'] = {
    'user': {
        'id': user.id,
        'username': user.username,
        'full_name': user.get_full_name(),
        'email': user.email,
    }
    # Returns empty dictionary {} if no user is associated with the flow
}

When a user runs the flow, user info is also loaded as real runtime variables:

flow_variables = {'user': None}
flow_variables['user'] = {
    'id': user.id,
    'username': user.username,
    'full_name': user.get_full_name(),
    'email': user.email,
}

Task Results Addition: context['tasks']

At the beginning of each task execution within a flow, the task results accumulated so far for the current element are merged into the context:

if self.ctx.flow and self.ctx.flow_context:
    self.ctx.context['tasks'] = self.ctx.flow_context[-1]['tasks']

This allows scripts to read results from previously-executed tasks of the same element within the current flow run.
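
For example, a processing script could read an earlier task's parsed output like this; the task primary key '12' is illustrative, and the keys follow the per-element entry structure shown below:

# Read the parsed result of a previously-executed task (task pk keys are strings)
previous = context['tasks'].get('12', {})
parsed = previous.get('processing_model_out', {}).get('result', {})
settings['prev_hostname'] = parsed.get('hostname', '')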

TaskContext Class

The TaskContext groups all execution-time variables for a single task run on a single element.

Source Location: apps/automation/task_utils.py → TaskContext

TaskContext Properties

class TaskContext:
    element        # Element ORM instance
    flow           # Flow ORM instance (or None if standalone task)
    flow_context   # list[dict] — the live flow context list (see Flow Context section)
    context        # dict — base element context (element_context_generate), modified in-flight
    user           # User ORM instance (or None)

Important: The context property is mutable throughout the pipeline. Management scripts, processing scripts, and the task runner itself add keys to it as the task progresses.

Flow Context (flow_context)

A Flow runs over one or more elements (devices). The flow_context is a list built progressively as the flow processes elements — one entry per element.

Source Location: apps/automation/flow_utils.py → FlowManager

Per-Element Entry Structure

Each element in the flow generates an entry with the following structure:

{
    "id": 42,                          # element.id
    "context": {                       # full element_context_generate() result
        ...                            # all keys from Base Element Context section
    },
    "tasks": {                         # populated after each task finishes
        "<task_pk_as_str>": {
            # Keys depend on which steps ran:
            "management_script": {
                "status": "OK", 
                "result": {...}
            },
            "process_model": {
                "status": "OK", 
                "result": "<rendered model string>"
            },
            "model_commands": {
                "status": "OK", 
                "result": "<raw command output>"
            },
            "processing_model_out": {
                "status": "OK", 
                "result": {...}
            },
            "element_data_mapped": {
                "status": "OK", 
                "result": {...}
            },
            "execute_service": {
                "status": "OK", 
                "result": {}
            },
        },
        "<another_task_pk>": { ... },
    }
}

The list grows as each element is processed by the flow:

How Flow Context is Used

Script Contexts

FLUX automation provides five distinct types of scripts, each receiving a different set of variables and serving specific purposes in the automation pipeline.

Flow Pre-Execution Script

Purpose: Runs once before any element loop, used to sort or filter the element list.

Script Type: processing_script on Flow
Source Location: FlowManager.flow_execute_processing_script()

Context Variables

{
    "elements": list[Element],         # full list of elements that will run the flow
    "order_flow_elements": Callable,   # sort_flow_elements(elements, sort_fn, reverse=False)
    "filter_flow_elements": Callable,  # filter_flow_elements(elements, filter_fn, inplace=True)
    "settings": {},                    # unused output placeholder
}

Output Expectations

Available Modules

math, json, random, naturaltime, re, ssh_key_generate, time

Flow Execution Script

Purpose: The main flow script that calls run_task() / run_flow() to orchestrate execution.

Script Type: execution_script on Flow

Serial Flow Context

Execution: One invocation per element
Source Location: SerialFlowManager.run_script()

{
    "context": dict,            # flatten_dictionary(flow_context) — flattened snapshot
    "flow_variables": {
        "user": {
            "id": ..., 
            "username": ..., 
            "full_name": ..., 
            "email": ...
        }                       # or None if no user
    },
    "element": Element,         # current element ORM instance
    "elements": list[Element],  # parent flow's elements (if nested), else None
    "current": int,             # index of current element (0-based)
    "previous": int,            # current - 1
    "last": int,                # len(elements) - 1
    "first": 0,
    "settings": {},             # unused
    "run_task": Callable,       # run_task(id=None, short_name=None, retry_attempts=1, retry_delay=0)
    "run_flow": Callable,       # run_flow(id=None, short_name=None, elements=None, ...)
}

Key Points:

Magic Replacements: Applied to the script string before execution:

  • current → str(current_order)
  • previous → str(current_order - 1)
  • last → str(flow_elements_count - 1)
  • first → "0"

Parallel Flow Context

Execution: One invocation for all elements at once
Source Location: ParallelFlowManager.run_script()

{
    "context": dict,                   # flatten_dictionary(flow_context)
    "flow_variables": dict,            # same as serial (see above)
    "elements": list[Element],         # all elements of this flow
    "settings": {},
    "run_task": Callable,              # run_task_in_elements(id=, short_name=, elements=None, ...)
    "run_flow": Callable,              # run_flow_in_elements(id=, short_name=, elements=None, ...)
    "filter_flow_elements": Callable,
}

Key Points:

Available Modules

math, json, random, naturaltime, re, ssh_key_generate, time, pause

TaskFlow Condition Script

Purpose: Executed before deciding whether to execute a task step for a given element.

Script Type: condition_value on TaskFlow
Source Location: task_flow_execute_condition_script()

Context Variables

{
    "flow_context": list[dict],   # full flow_context list (see Flow Context section)
    "current": int,               # current element order
    "previous": int,              # current - 1
    "last": int,                  # len(elements) - 1
    "first": 0,
    "settings": {},
}

Output Requirements

The script must assign: condition = True or condition = False
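
A minimal example that runs the step only for the last element of the flow:

# current and last are provided in the condition script context
condition = (current == last)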

Available Modules

math, json, random, naturaltime, re, ssh_key_generate, time

Task Management Script

Purpose: Runs as Step 1 of the task pipeline, before any connection to elements. Used to enrich the context (e.g., resolve credentials, build payloads, call APIs, wait for element availability).

Script Type: management_script on Task
Source Location: task_execute_script(..., klass='mgmt')

A Task receives element context and can enrich it while it runs. When a task starts inside a flow:

element_context = element_context_generate(element)
element_context['flow'] = flow_variables if flow_variables else {}

Then, right before task execution, previous task results for that same element are injected:

self.ctx.context['tasks'] = self.ctx.flow_context[-1]['tasks']

Context Variables

{
    "element": PublicElement,         # wraps the Element (restricted write interface)
                                      # only if 'element.' appears in the script text
    "config": PublicElementConfig,    # wraps ElementConfig (read/save last config)
                                      # only if 'config.' appears in script text
    "data": {},                       # always empty for management scripts
    "context": dict,                  # full element context dict (mutable — write here)
    "flow_context": None,             # NOT passed from run_management_script
    "settings": {},                   # not used as output for mgmt scripts
    "pause": Callable,                # pause(sleep=30, user_msg=None)
    "wait_for_connect": Callable,     # wait_for_connect(timeout=10, retry=1, sleep=30, user_msg=None)
    "log_message": Callable,          # log_message(level, msg)
}

Writing Output

# Add/update keys in element context
context['my_new_variable'] = 'value'
context['credentials'] = {'user': 'admin', 'pw': 'secret'}

Output Processing

If a management script returns new context data, it is merged directly:

self.ctx.context.update(script_out)

Available Modules

math, json, random, naturaltime, re, ssh_key_generate, time

Task Processing Script

Purpose: Runs as Step 7, after model commands have been sent and parsed. Used to transform, validate, or enrich the data extracted from the element before it is stored in the database.

Script Type: processing_script on Task
Source Location: task_execute_script(..., klass='proc')

Context Variables

{
    "element": PublicElement,         # same as management script
    "config": PublicElementConfig,
    "data": dict,                     # current processing_out (command results, parsed data)
    "context": dict,                  # full element context dict (read + write)
    "flow_context": None,             # NOT currently passed from run_processing_script
    "settings": {},                   # output dict — write results here
    "pause": Callable,
    "wait_for_connect": Callable,
    "log_message": Callable,
}

Writing Output

settings['hostname'] = data.get('Hostname', '').strip()
settings['action_status'] = 'ok'       # optional — stored in task results
settings['action_info'] = 'Hostname updated'

Special Output Keys

  • action_status: Stored in TaskResults.action_status and the flow log
  • action_info: Stored in TaskResults.action_info and the flow log

Output Processing

Available Modules

math, json, random, naturaltime, re, ssh_key_generate, time

How Task Context is Used

The task can:

Concrete runtime behavior already used in code:

if not self.results.commands_out and self.ctx.context.get('automation_custom_output'):
    self.results.commands_out = self.ctx.context['automation_custom_output']

Model Python Script Context

For models with klass in ('pyms', 'pyps'), a Python script runs inside the model data mapping process.

Source Location: apps/automation/utils.py → model_data_map_python_script()

Context Variables

{
    "model": Model,          # the Automation Model ORM instance
    "data": str,             # raw text data received from element (for pyps)
                             # empty string for pyms (no raw data)
    "context": dict,         # full element context dict (read-only recommendation)
    "output": {},            # write results here
    "settings": {},          # alternative output dict
}

Writing Output

output['parsed_value'] = some_function(data)
output['hostname'] = re.search(r'hostname (\S+)', data).group(1)

Output Processing

Additional Modules

math, json, random, naturaltime, re, time, boto3, botocore, collections, datetime, timedelta, gcp_bigquery, gcp_service_account, timezone, pytz

Service Execution Context (email-msg)

Purpose: Handles email service execution during task completion.

Source Location: apps/automation/utils.py → task_execute_service()
Called In: TaskManager.execute_service() (Step 9)

A Service (email service in this case) uses the context available at service execution time.

Email Context Assembly

At service step, the email context is assembled exactly like this:

email_context = {
    # ── Fixed Keys ──────────────────────────────────────────────────────────────
    'element': element,               # raw Element ORM instance
    'task': task,                     # Task ORM instance (name, short_name, id, ...)
    'body_custom_msg': notification_content,  # 'content' param from service settings

    # ── Task Context (spread) ───────────────────────────────────────────────────
    # Everything from TaskContext.context at the time execute_service runs:
    #   - all base element context keys (group variables, element settings, context['element'], epoch)
    #   - context['flow'] = flow_variables (if running inside a flow)
    #   - context['tasks'] = previous tasks results for this element in this flow
    #   - any keys added by management_script or processing_script
    **task_context,

    # ── Flow Context Entry (spread) ─────────────────────────────────────────────
    # The flow_context[-1] dict for the current element (see Flow Context section), or {} if standalone:
    #   - "id": element.id
    #   - "context": element_context_generate(element)  (original snapshot)
    #   - "tasks": { "<task_pk>": task_detail, ... }
    **flow_context_entry,
}

Service Settings Parameters

These parameters are configured on the Service instance of type email-msg:

Parameter Usage
from-email Sender address
custom_to Comma-separated recipient list
subject Email subject (Django template rendered)
content body_custom_msg in the email context
template Template ID (optional, selects HTML template)
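
For example, a subject template can reference any key of the email context; the keys used here (element, task) are the fixed keys listed in the Email Context Assembly section:

[{{ element.name }}] {{ task.name }} completed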

Injecting Custom Variables

The task_context received by task_execute_service() is directly self.ctx.context — the same mutable dictionary that circulates through the entire task pipeline. Since the service runs in step 9, results from both the management script (step 1) and processing script (step 7) are available.

Management Script → Official Channel

Script results are applied with .update() to ctx.context:

# In task management_script (klass='svc'):
context['recipient_email'] = element.get('email_contact', '')
context['alert_level'] = 'critical' if context.get('status') == 'down' else 'info'
context['custom_subject'] = f"[{element['name']}] Alert processed"
# All these keys will be available in the email template

Processing Script → Direct Dictionary Mutation

The processing script receives the same context object by reference. Its official output (settings) goes to processing_out (and from there to the database), not to ctx.context. However, since the dictionary is passed by reference, writing directly to context within the processing script also persists and reaches the service:

# In task processing_script:
settings['sw_version'] = data.get('version', '')   # → processing_out → Database

# To pass something additional to the service:
context['parsed_hostname'] = data.get('hostname', '')  # → persists in ctx.context → reaches service

Data Flow Summary

  • context['x'] = val in the management script → ctx.context (via .update()) → available in email_context
  • context['x'] = val in the processing script → ctx.context (written directly, by reference) → available in email_context
  • settings['x'] = val in the processing script → results.processing_out → database → not available in email_context (unless also copied to context)
  • flow_variables (user info from the flow) → context['flow'] → available as context['flow']['user']

How Service Context is Used

Email subject/body and template can use:

Context Lifecycle Within Task Execution

The following list summarizes which context is available at each step of TaskManager.execute():

  • execute() starts: base element context; context['tasks'] is added if running inside a flow
  • Step 1, run_management_script(): base context; the script can add or modify any key in context
  • Step 2, element_data_mapping(): context enriched by step 1; results.model_out is set
  • Steps 3–6, connect_with_element(): same context; results.commands_out and processing_out are built
  • Step 7, run_processing_script(): same context plus data=processing_out; results.processing_out is updated from settings, and action_status/action_info are stored
  • Step 8, mapping_result_to_database(): same context plus processing_out; element parameters are saved to the database
  • Step 9, execute_service(): same context; the email is sent and service_out is merged into processing_out
  • Step 10, finish_execution(): final context; the task log is created and flow_context[-1]['tasks'][task_pk] is updated

Context Lifecycle Within Serial Flow Execution

FlowManager.flow_execute()
│
├─ flow_execute_processing_script()  ← receives: elements list (can sort/filter)
│
└─ for each element:
       flow_context.append(get_flow_context_for_element(element))
       │
       └─ SerialFlowManager.run_script(element, order)
              │   receives: flattened flow_context, element, positions, run_task, run_flow
              │
              └─ run_task(id, short_name)
                     │
                     └─ run_task_wrapper()
                            │   element_context = element_context_generate(element)
                            │   element_context['flow'] = flow_variables
                            │
                            └─ TaskManager.execute()
                                   ├─ context['tasks'] = flow_context[-1]['tasks']
                                   ├─ run_management_script()   → context enriched
                                   ├─ element_data_mapping()
                                   ├─ connect_with_element()
                                   ├─ model_commands_execution()
                                   ├─ custom_model_commands_output_processing()
                                   ├─ convert_command_results...()
                                   ├─ run_processing_script()   → processing_out enriched
                                   ├─ mapping_result_to_database()
                                   ├─ execute_service()         → email sent with full context
                                   └─ finish_execution()
                                          └─ flow_context[-1]['tasks'][task_pk] = task_detail

Context Lifecycle Within Parallel Flow Execution

ParallelFlowManager
│
├─ for each element:  flow_context.append(get_flow_context_for_element(element))
│
└─ run_script()
       │   receives: flattened flow_context, all elements, run_task_in_elements
       │
       └─ run_task_in_elements(id, short_name, elements=None)
              │
              └─ spawns one thread per element → run_task_wrapper() (same as serial)

Key Difference: The parallel flow script sees all elements at once. The context variable in the script is the flattened snapshot of flow_context (all elements merged), not a per-element view.

Variable Availability Reference

The following reference lists where each variable is available, consolidating the per-script contexts above:

  • element: Flow Exec (Serial) as Element ORM; Task Mgmt and Task Proc scripts as PublicElement
  • elements: Flow Pre-exec; Flow Exec (Serial, the parent flow's list); Flow Exec (Parallel)
  • context: Flow Exec Serial/Parallel (flattened snapshot); Task Mgmt (mutable); Task Proc (read and write); Model pyms/pyps (read); Service (spread into email_context)
  • flow_context: Task Condition (full list); Service (latest entry, spread into email_context)
  • flow_variables (user info): Flow Exec (Serial and Parallel)
  • data: Task Mgmt (always empty); Task Proc (processing_out); Model pyps (raw text)
  • settings: Task Proc (write output here); present but unused in Flow Pre-exec, Flow Exec, Task Condition, and Task Mgmt; alternative output in Model scripts
  • output: Model pyms/pyps (write results here)
  • condition: Task Condition (required output)
  • run_task / run_flow: Flow Exec (Serial and Parallel)
  • current / previous / last / first: Flow Exec (Serial); Task Condition
  • config (PublicElementConfig): Task Mgmt; Task Proc
  • pause / wait_for_connect: Task Mgmt; Task Proc
  • task (Task ORM) and body_custom_msg: Service (Email)

Writing to Context from Scripts

Management Script → Context Enrichment

context['resolved_ip'] = '10.0.0.1'
context['credentials'] = {'username': 'admin', 'password': 'secret'}
# These are available to all subsequent steps in the same task execution

Processing Script → Settings Population

settings['hostname'] = data.get('hostname', '').strip()
settings['sw_version'] = re.search(r'Version (\S+)', data.get('version', '')).group(1)
settings['action_status'] = 'ok'   # or 'warning', 'error'
settings['action_info'] = 'Parsed 3 parameters successfully'
# Keys in settings are merged into processing_out and saved to ElementSetting

Model pyms/pyps Script → Output Writing

output['interface_count'] = len(re.findall(r'^interface', data, re.M))
output['mgmt_address'] = context.get('element', {}).get('management_address')

Flow Execution Script → Results Reading

# After run_task runs, results are accumulated in flow_context and
# available via the flattened context dict in the next run_task call
# (for reading, use the full flow_context structure)
run_task(short_name='collect-hw')
run_task(short_name='send-report')  # can read results from collect-hw via context['tasks']

Summary: End-to-End Context Flow

Context is shared runtime data that follows this path:

  • Flow: real context source is the per-element {"id", "context", "tasks"} entry plus flow_variables['user']; used to coordinate and track the full flow execution
  • Task: element_context_generate(...) plus flow, tasks, and script updates; used to run task logic with up-to-date shared data
  • Service: email_context = fixed keys plus the merged task/flow context; used to send dynamic email content with real execution data

Real Path Through System

  1. Flow creates per-element context: id, context, tasks.
  2. Task starts and receives base context + flow_variables.
  3. Task injects previous task results into context['tasks'].
  4. Management/processing steps enrich available data.
  5. Service runs and receives merged email_context (element + task + accumulated context).

Result: Services run with real, current data from the same execution — no manual copy/paste. Flows create and organize context, tasks enrich and consume it, and services use it to execute with full, real execution state.