Run and Validate Federator

Purpose

This section starts a local Federator deployment and validates federation end-to-end. You will:

  • Select a federation topology
  • Start containers using Docker Compose
  • Verify Federator Producer behaviour (server role)
  • Verify Federator Consumer behaviour (client role)
  • Verify security label filtering using securityLabel headers
  • Verify Kafka-to-Kafka federation into federated.* topics
  • Confirm Redis-backed offset tracking

This section is runnable locally once all required configuration and environment variables are set, and mirrors production patterns for development and testing.

Stage 4 - Run and Validate Federator

How to complete this stage

You will:

  • Set required environment variables
  • Review and update required local configuration
  • Select and start a deployment topology
  • Validate Producer and Consumer behaviour
  • Verify filtering, federation, and offset tracking
  • Confirm end-to-end message flow

4.1 Set required environment variables

Before starting any Federator deployment, you must set the environment variables that tell the server and client where to load their properties files from. These variables are mandatory for local execution; if they are not set, the application will fail to load its configuration. From the repository root, run:

export FEDERATOR_SERVER_PROPERTIES=./src/configs/server.properties
export FEDERATOR_CLIENT_PROPERTIES=./src/configs/client.properties

Checkpoint

Before continuing, confirm:

  • Both environment variables are set in the terminal session you will use to start the deployment
  • The referenced files exist
  • The paths are correct relative to the repository root

Verify the variables are set:

echo $FEDERATOR_SERVER_PROPERTIES
echo $FEDERATOR_CLIENT_PROPERTIES
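As a stricter sanity check than the echo commands above, a small shell helper can confirm each variable is both set and pointing at an existing file (a sketch; the function name is illustrative):

```shell
# check_props_var NAME: report whether the named variable is set and whether
# the file it points at exists.
check_props_var() {
  eval "value=\${$1}"
  if [ -z "$value" ]; then
    echo "MISSING: $1 is not set"
  elif [ ! -f "$value" ]; then
    echo "BAD PATH: $1 -> $value (file not found)"
  else
    echo "OK: $1 -> $value"
  fi
}

check_props_var FEDERATOR_SERVER_PROPERTIES
check_props_var FEDERATOR_CLIENT_PROPERTIES
```

Both lines should print `OK:` before you continue; a `MISSING:` or `BAD PATH:` result means the deployment will fail to load configuration.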

4.2 Review required local configuration

The default Federator configuration files may contain unresolved placeholders and are not runnable for local development without modification. For local execution, you must review and populate the required Identity Provider and TLS-related properties before starting the Federator.

Examples of unresolved placeholders may include:

idp.jwks.url=${IDP_JWKS_URL}
idp.client.id=${IDP_CLIENT_ID}
idp.keystore.path=${IDP_KEYSTORE_PATH}

If these values are left unresolved, the application may fail at runtime with configuration loading errors, SSL failures, or JWT validation failures. The following properties must be set explicitly for local environments:

idp.jwks.url=https://localhost:8443/realms/management-node/protocol/openid-connect/certs
idp.token.url=https://localhost:8443/realms/management-node/protocol/openid-connect/token
idp.client.id=FEDERATOR_HEG
idp.keystore.path=/path/to/client.p12
idp.keystore.password=changeit
idp.truststore.path=/path/to/truststore.jks
idp.truststore.password=changeit

Update the values above to match your local certificate and truststore locations.
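A quick way to catch leftover placeholders before startup is to scan the properties files for unexpanded `${...}` tokens (a sketch; the paths assume the repository root and the helper name is illustrative):

```shell
# find_placeholders FILE: list lines still containing unresolved ${...}
# placeholders, or confirm the file is clean.
find_placeholders() {
  grep -n -F '${' "$1" || echo "no unresolved placeholders in $1"
}

# Check both Federator properties files from the repository root.
for f in ./src/configs/server.properties ./src/configs/client.properties; do
  if [ -f "$f" ]; then find_placeholders "$f"; fi
done
```

Any line number printed points at a value you still need to populate.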

Checkpoint

Verify:

  • The Federator server and client properties files do not contain unresolved placeholder values for required IDP settings
  • idp.jwks.url points to a reachable local Identity Provider JWKS endpoint
  • idp.token.url points to a reachable local token endpoint
  • idp.client.id is set explicitly
  • idp.keystore.path points to an existing local keystore file
  • idp.truststore.path points to an existing local truststore file

4.3 Select and start a deployment topology

Meaningful federation requires at least one Producer and one Consumer. The repository provides multiple federation scenarios. Select the topology that matches your evaluation model.

The provided Docker Compose configurations simulate realistic NDTP participation patterns:

  • One organisation distributing data to many consumers
  • One organisation ingesting data from multiple producers
  • Multi-party cross-federation

These scenarios allow you to validate:

  • Mutual TLS authentication
  • Explicit topic authorisation
  • Label-based filtering
  • Secure Kafka-to-Kafka federation
  • Offset tracking and replay safety

For full evaluation and demonstration purposes, the multiple-clients / multiple-server configuration is recommended.

4.3.1 Set Federator image variables

Docker Compose references images using ARTIFACT_ID and VERSION.

From the repository root:

export ARTIFACT_ID="federator"
export VERSION="1.1.0"

Ensure VERSION matches the version defined in the root pom.xml. Update VERSION if the project version changes.
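To keep VERSION from drifting out of sync with the POM, you can derive it instead of hard-coding it. This is a sketch: it naively takes the first <version> element in pom.xml, which in some POM layouts is the parent's version rather than the project's, so verify the output against your project:

```shell
# pom_version FILE: print the first <version> value found in FILE.
# Caution: in some POM layouts the first <version> element belongs to the
# parent, not the project - check the output before relying on it.
pom_version() {
  grep -m1 '<version>' "$1" | sed -E 's/.*<version>([^<]*)<\/version>.*/\1/'
}

# Hypothetical usage from the repository root:
#   export VERSION="$(pom_version pom.xml)"
```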

4.3.2 Choose a topology

Option 1: Multiple Producers / Multiple Consumers

Start the deployment:

docker compose \
  --file docker/docker-compose-multiple-clients-multiple-server.yml \
  up --no-build --pull never

Recommended for full end-to-end validation and demonstrations.

Option 2: Single Producer / Multiple Consumers

Start the deployment:

docker compose \
  --file docker/docker-compose-multiple-clients-single-server.yml \
  up --no-build --pull never

Recommended for:

  • Testing controlled data distribution
  • Observing authorisation behaviour across multiple clients
  • Validating selective release of labelled data

Option 3: Multiple Producers / Single Consumer

Start the deployment:

docker compose \
  --file docker/docker-compose-single-client-multiple-servers.yml \
  up --no-build --pull never

Recommended for:

  • Testing inbound federation
  • Validating offset tracking from multiple sources
  • Verifying policy enforcement across producers

⚠️ For all options, run Docker Compose in a dedicated terminal.

Start the stack in Terminal 1 and leave it running. Use Terminal 2 for inspection commands such as:

docker compose ps
docker compose logs
docker exec ...

If you stop Terminal 1 (Ctrl+C), the entire stack stops and container checks will fail.

Optional — Pre-pull supporting images

If you have not previously pulled supporting public images (or if using --pull never), pre-pull them once:

docker pull redis:latest
docker pull confluentinc/cp-zookeeper:7.5.3
docker pull confluentinc/cp-kafka:7.5.3

This prevents image-not-found errors during startup.

Expected Startup Behaviour

For all Federator configurations:

  • Containers start successfully
  • Kafka brokers initialise
  • Producers expose gRPC endpoints
  • Consumers connect via MTLS
  • Authorised topics are discovered
  • Federation begins once Kafka is ready
  • Test topics may be created automatically

Checkpoint

View running containers:

docker compose ps
docker compose ps -a

docker compose ps shows only running containers. If it appears empty, use docker compose ps -a to see exited containers and exit codes.
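The state check can be scripted. The helper below reads "name state" pairs and flags anything that is not running; it is a sketch, and the `docker ps` format string shown in the comment is the standard Go-template form (adapt the pipeline if your Compose version supports `docker compose ps --format` directly):

```shell
# flag_not_running: read "name state" lines on stdin and report containers
# whose state is not "running".
flag_not_running() {
  awk '$2 != "running" { print "NOT RUNNING: " $1 " (" $2 ")"; bad = 1 }
       END { if (!bad) print "all containers running" }'
}

# Hypothetical usage against the live stack:
#   docker ps -a --format '{{.Names}} {{.State}}' | flag_not_running
```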

Tail logs:

docker logs federator-server --tail 80
docker logs federator-client --tail 80

Operational Notes

  • Initial startup may take several minutes while Kafka initialises.
  • If Kafka is not ready, Federator services may retry connections.
  • Logs are the primary diagnostic tool during startup.
  • You may see LEADER_NOT_AVAILABLE during topic creation or broker startup. This is common while Kafka initialises and should resolve within 10–30 seconds. If it persists:
      • Confirm correct Kafka ports
      • Check broker logs
      • Confirm topics exist

Useful commands:

docker compose ps
docker compose logs

Checkpoint (startup)

  • Docker Compose starts without errors
  • All containers are running
  • No repeated restarts
  • Producers and Consumers initialised successfully (view logs)

If a container repeatedly exits:

docker inspect <container-name> --format 'RestartCount={{.RestartCount}}'

If RestartCount increases, the container is crash-looping.

Proceed to Producer validation.

4.4 Verify Producer behaviour

Federator Producers act as controlled gateways between organisational domains.

They do not blindly expose Kafka topics. Instead, they:

  • Discover source Kafka topics
  • Apply configuration and authorisation rules
  • Filter messages based on security labels
  • Expose only authorised streams via gRPC

Validating Producer behaviour ensures:

  • Trust boundaries are enforced at the source
  • No unauthorised topics are exposed
  • gRPC services are correctly initialised

Without correct Producer initialisation, federation cannot occur.

List running containers
docker compose ps

Identify containers corresponding to Producer services.

Inspect Producer logs
docker compose logs <producer-service-name>

Expected behaviour

Logs should indicate:

  • Producer service started
  • gRPC server initialised
  • Connection to source Kafka established
  • Topics discovered
  • Authorisation rules loaded
  • Authorisation topics registered

If topics are not registered, federation will not occur. If the Kafka connection fails, the Producer may retry until brokers are available.

Checkpoint (Producers)

  • Producers running
  • Kafka connectivity confirmed
  • Authorised topics registered
  • gRPC active

Proceed to Consumer validation.

4.5 Verify Federator Consumer behaviour

Federator Consumers enforce pull-based federation.

They do not receive data automatically. They:

  • Establish a mutually authenticated (MTLS) gRPC connection to a Producer
  • Request the list of authorised topics
  • Pull approved message streams
  • Write federated messages to local Kafka topics (prefixed with federated.)
  • Track consumption offsets in Redis

Validating Consumer behaviour confirms:

  • Mutual authentication is working
  • Topic authorisation rules are enforced
  • Data is flowing across the federation boundary
  • Offset tracking ensures replay safety

Without successful Consumer initialisation, federation cannot occur.

List running services
docker compose ps

Identify containers corresponding to Consumer services. In some scenarios, there may be multiple Consumers.

Inspect logs for each Consumer
docker compose logs <consumer-service-name>

Expected behaviour

Logs should indicate:

  • Successful MTLS connection to a Producer
  • Authentication completed
  • Authorised topics discovered
  • Subscription to approved topics
  • Streaming of messages over gRPC
  • Messages written to federated.* topics

If authentication fails:

  • Check certificate configuration
  • Check Producer availability

If no topics are authorised, federation will not begin.
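Scanning Consumer logs for the indicators above can be automated. The helper below is a sketch, and the marker strings passed to it are illustrative placeholders, not verbatim Federator log messages — substitute strings that appear in your actual logs:

```shell
# check_log_markers MARKER...: read log text on stdin and report which of the
# expected markers are present (case-insensitive fixed-string match).
check_log_markers() {
  logs=$(cat)
  for marker in "$@"; do
    if printf '%s\n' "$logs" | grep -qiF "$marker"; then
      echo "found: $marker"
    else
      echo "MISSING: $marker"
    fi
  done
}

# Hypothetical usage (marker strings are examples only):
#   docker compose logs <consumer-service-name> | \
#     check_log_markers "TLS" "authorised topics" "federated."
```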

4.6 Verify Redis offset tracking

Federator Consumers use Redis to track offsets and maintain deterministic replay behaviour.

Identify Redis container:

docker compose ps

Enter the Redis CLI:

docker exec -it <redis-container-name> redis-cli

Inside the Redis CLI, list keys:

keys *

Expected:

  • Offset tracking keys present
  • Offsets update as messages are consumed

Restart a Consumer and confirm it resumes from stored offsets.
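One way to confirm offsets are advancing is to sample the same key twice and compare. The comparison helper below is a sketch; the redis-cli usage shown in comments assumes a string-valued offset key, and the key name is illustrative — your key names and encodings may differ:

```shell
# offsets_advanced BEFORE AFTER: report whether an offset moved forward
# between two samples.
offsets_advanced() {
  if [ "$2" -gt "$1" ]; then
    echo "offsets advancing ($1 -> $2)"
  else
    echo "offsets NOT advancing ($1 -> $2)"
  fi
}

# Hypothetical usage (key name is illustrative):
#   before=$(docker exec <redis-container-name> redis-cli GET <offset-key>)
#   sleep 10
#   after=$(docker exec <redis-container-name> redis-cli GET <offset-key>)
#   offsets_advanced "$before" "$after"
```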

Checkpoint (Consumers)

  • Consumers authenticate successfully via MTLS
  • Authorised topics are discovered
  • Federated topics are being written locally
  • Redis contains offset tracking keys
  • No authentication or authorisation failures are present in logs

Proceed to label filtering validation.

4.7 Verify security label filtering

Federation is not all-or-nothing. Federator Producers apply policy-based filtering before releasing messages across organisational boundaries.

Filtering is based on:

  • Kafka message headers (e.g. securityLabel)
  • Client identity and authorisation rules
  • Configured access policies

Only messages whose security labels match the Consumer’s authorised credentials are streamed over gRPC. Messages that do not match are silently ignored and never leave the source domain. This enforces the NDTP minimum-trust model: data is shared only when explicitly authorised.
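The default exact-match semantics can be illustrated with a tiny predicate — a sketch of the behaviour described above, not the actual filter implementation:

```shell
# label_authorised MESSAGE_LABEL AUTHORISED_LABEL: emulate the default
# exact-match, case-sensitive securityLabel comparison.
label_authorised() {
  if [ "$1" = "$2" ]; then echo "RELEASE"; else echo "WITHHOLD"; fi
}

# Examples:
#   label_authorised "nationality=GBR" "nationality=GBR"   # RELEASE
#   label_authorised "nationality=FRA" "nationality=GBR"   # WITHHOLD
#   label_authorised "Nationality=GBR" "nationality=GBR"   # WITHHOLD (case-sensitive)
```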

Inspect source Kafka messages

Identify the source Kafka container:

docker compose ps

Inspect source topic
docker exec -it <kafka-source-container> \
  kafka-console-consumer \
  --bootstrap-server localhost:<source-port> \
  --topic <source-topic-name> \
  --from-beginning \
  --property print.headers=true

Confirm messages contain: securityLabel=nationality=GBR

Use the exact header format defined in your configuration and ensure consistent casing (securityLabel is case-sensitive).

Inspect federated target topic

Inspect federated topic in the Consumer’s Kafka broker:

docker exec -it <kafka-target-container> \
  kafka-console-consumer \
  --bootstrap-server localhost:<target-port> \
  --topic federated.<source-topic-name> \
  --from-beginning \
  --property print.headers=true

Expected behaviour

  • Only messages with authorised security labels appear in the federated topic
  • Messages with non-matching labels are silently ignored and do not appear
  • Message payloads remain unchanged
  • Headers may include additional federation metadata

If all messages are transferred regardless of label:

  • Filtering is not being enforced

If no messages are transferred, verify:

  • Consumer authorisation configuration
  • Producer filtering rules
  • Matching security label format

Operational Notes

  • Header key names are case-sensitive in Kafka
  • Ensure you are using the exact header name configured in the Producer (securityLabel)
  • The default filter performs an exact match
  • Custom filters may apply additional logic depending on configuration

Checkpoint (filtering)

  • Source messages contain security labels in headers
  • Federated topics contain only authorised messages
  • Non-matching messages are not transferred
  • Filtering is confirmed to be functioning correctly

Proceed to Kafka-to-Kafka federation validation.

4.8 Verify Kafka-to-Kafka federation

Federation is complete only when authorised messages arrive in the Consumer’s target Kafka broker, ready for downstream ingestion by IA Nodes.

The Federator should:

  • Read from a source topic on the Producer-side Kafka broker
  • Apply label-based filtering and authorisation
  • Stream approved messages over gRPC/MTLS
  • Write those messages into a target Kafka broker under federated.* topics

This stage confirms that the boundary crossing results in durable, auditable Kafka records inside the Consumer domain.

Federated topic naming convention

Federated topics written by Consumers use a consistent prefix to distinguish them from locally produced topics. Default naming convention:

federated.<source-topic-name>
Example

Source topic on Producer:

DP1

Federated topic on Consumer:

federated.DP1

This naming convention ensures:

  • Clear separation between local and federated data
  • Explicit identification of cross-boundary traffic
  • Controlled ingress into the Consumer domain
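The convention is mechanical and easy to apply in scripts (the helper name is illustrative):

```shell
# federated_name TOPIC: apply the default federated.<source-topic-name>
# naming convention to a source topic.
federated_name() {
  echo "federated.$1"
}
```

For example, `federated_name DP1` prints `federated.DP1`.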

Identify source and target Kafka brokers

List running containers:

docker compose ps

Identify:

  • Source Kafka container (Producer-side)
  • Target Kafka container (Consumer-side)

Kafka Ports (Internal to This Stack)

Each Kafka broker advertises multiple listeners. Use the listener that matches where you run your command.

  • From another container in the same compose network: use the container name and internal port (e.g. kafka-src:19092 or kafka-target:19092)
  • From the host: use the mapped localhost port (localhost:19093 for kafka-src, localhost:29093 for kafka-target)
  • A 127.0.0.1:9092/9093 listener may also be available, if configured

⚠️ If you use the wrong port, you may see: "Connection to node -1 could not be established. Broker may not be available."

This usually indicates an incorrect port, not a broker failure.
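To keep listener selection straight, the mapping above can be captured in a small helper (a sketch; the addresses mirror this stack's compose configuration and will differ if your port mappings change):

```shell
# broker_address WHERE SIDE: print the bootstrap address to use, where
# WHERE is "container" or "host" and SIDE is "src" or "target".
broker_address() {
  case "$1-$2" in
    container-src)    echo "kafka-src:19092" ;;
    container-target) echo "kafka-target:19092" ;;
    host-src)         echo "localhost:19093" ;;
    host-target)      echo "localhost:29093" ;;
    *)                echo "unknown" ;;
  esac
}
```

For example, a kafka-console-consumer run on the host against the source broker would use `$(broker_address host src)`.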

List federated topics on the target broker

Run topic listing on the target Kafka container:

docker exec -it <kafka-target-container> \
  kafka-topics \
  --bootstrap-server localhost:<target-port> \
  --list | grep '^federated'

Expected:

  • Topics prefixed with federated

Note:

  • Standardise on a single convention (recommended: federated.)
  • Ensure this matches what the running compose configuration creates

Consume from a federated topic and verify data is present

Pick one federated topic discovered above (example: federated.DP1) and consume messages:

docker exec -it <kafka-target-container> \
  kafka-console-consumer \
  --bootstrap-server localhost:<target-port> \
  --topic federated.<source-topic-name> \
  --from-beginning \
  --property print.headers=true

Verify:

  • Messages are present
  • Headers are visible (including the original securityLabel header and/or federation metadata)

Expected behaviour

  • Federated topics exist on the target broker with the federated prefix
  • Messages appear for authorised label matches
  • Message payloads are unchanged (value-level equivalence)
  • Headers may include additional federation metadata
  • Unauthorised messages are not present

If federated topics exist but are empty:

  • Confirm Consumers are running and connected
  • Confirm Producers have discoverable topics
  • Confirm label filtering is not excluding all data

If no federated topics exist:

  • Confirm the Consumer is configured to write to the target broker
  • Confirm Consumers are subscribing to at least one authorised topic

Checkpoint (End-to-End Validation: Kafka-to-Kafka)

  • Federated topics exist on the target Kafka broker
  • Federated topics contain messages
  • Payload integrity between source and target is confirmed
  • Federation behaviour matches security filtering expectations

4.9 Perform an end-to-end test

Terminal 1 – Consumer on target, inside container
docker exec -it kafka-target kafka-console-consumer \
  --bootstrap-server kafka-target:19092 \
  --topic federated-FederatorServer1-knowledge

Terminal 2 – Producer on source, inside container
echo "test-$(date +%s)" | docker exec -i kafka-src kafka-console-producer \
  --bootstrap-server kafka-src:19092 \
  --topic knowledge

The message should appear in Terminal 1.

This confirms:

  • kafka-src working
  • federator-server consuming
  • Filtering applied
  • kafka-target producing
  • Networking configured correctly

If you see Connection to node -1 could not be established, it usually means you used a host listener from inside a container (or vice versa). Re-check whether your command is running on the host or in a container, then pick the matching advertised listener.
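The two-terminal test can also be scripted as a single round-trip check. This is a sketch: the container, broker, and topic names follow the example above and may differ in your stack, and the live docker commands are shown as comments:

```shell
# Generate a unique token so the round trip is unambiguous.
token="test-$(date +%s)"

# Hypothetical run against the live stack:
#   echo "$token" | docker exec -i kafka-src kafka-console-producer \
#     --bootstrap-server kafka-src:19092 --topic knowledge
#   received=$(docker exec kafka-target kafka-console-consumer \
#     --bootstrap-server kafka-target:19092 \
#     --topic federated-FederatorServer1-knowledge \
#     --from-beginning --timeout-ms 30000)
#   round_trip_ok "$received" "$token"

# round_trip_ok RECEIVED TOKEN: succeed if the unique token came through.
round_trip_ok() {
  case "$1" in
    *"$2"*) echo "round trip OK" ;;
    *)      echo "token not received" ;;
  esac
}
```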

Federation is now validated end-to-end.

Checkpoint

At the end of this stage, confirm:

  • The selected topology starts successfully
  • Producers initialise and expose authorised topics
  • Consumers authenticate and subscribe successfully
  • Redis tracks offsets correctly
  • Security label filtering is enforced
  • Federated Kafka topics are created and populated
  • End-to-end message flow is confirmed

Next Step: Review