App Engine

Live Data

A unified data fabric where live market feeds, intraday events, and resting data from warehouses and legacy platforms converge: governed, queryable, and ready for real-time decision-making.

The data foundation

Where live feeds and resting data converge

In financial institutions, data is never purely "real-time" or purely "historical". Risk, trading, compliance, and operations all depend on the continuous interaction between live market feeds, intraday events, and vast stores of resting data held in warehouses and legacy platforms. Treating these sources as separate domains creates artificial boundaries that slow decision-making, complicate architectures, and introduce operational risk. As a result, finance increasingly expects a single, coherent data layer where live and historical information can be accessed, queried, and acted upon as one.

Live data, in this context, means a unified access point that spans streaming and resting datasets across feeds, databases, and legacy infrastructures, governed by entitlements and security policies. This data layer is not passive: it enriches raw data with business logic through complex event processing, enabling systems to detect conditions, aggregate signals, and react deterministically as events unfold. It then makes the data available to users and agents through many integrations and interfaces, from real-time dashboards and reports to streams into Python, ODBC, and C++ applications. In finance, such a live data foundation is no longer a differentiator: it is a prerequisite for building systems that are responsive, auditable, and fit for real-time decision-making.

Data connectivity

Consuming data and virtualizing access

With over 35 database adapters and 15 feed adapters, 3forge can consume real-time and resting data from virtually any source and present it as a single access point to data consumers. By insulating downstream apps from idiosyncratic data formats, 3forge accelerates project deployments and enables explicit access control and performance management. Once in 3forge, the data can be queried via REST API, JDBC, ODBC, and MCP, through bidirectional libraries in .NET, Python, and Java, and via RPC between the 3forge Relays and external systems.
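
As one illustration of programmatic access, a downstream consumer might compose a query over HTTP. The endpoint path (`api/query`), port, and parameter name below are assumptions made for this sketch, not the documented 3forge REST API:

```python
from urllib.parse import urlencode, urljoin

def build_query_url(base_url: str, sql: str) -> str:
    """Compose a REST query URL against a 3forge-style HTTP endpoint.

    The `api/query` path and the `sql` parameter name are illustrative
    assumptions; consult the 3forge REST documentation for the real API.
    """
    return urljoin(base_url, "api/query") + "?" + urlencode({"sql": sql})

# Hypothetical host and port; the SQL is sent URL-encoded as a parameter.
url = build_query_url("http://ami-host:33332/", "SELECT * FROM orders WHERE qty > 100")
print(url)
```

The same query text could equally be submitted through the JDBC or ODBC drivers; the REST form is convenient for scripting and monitoring tools.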

Whether the data is consumed by users on UI screens, PDF and email reports, AI language models, or third-party applications, 3forge can mesh different datasets, augment them with derived analytics updated as data changes, and protect them with granular entitlements.

3forge data connectivity — feeds, databases, and access protocols
3forge connects to over 35 database adapters and 15 feed handlers, presenting a unified, governed access point to all downstream consumers

Supported Database Adapters

  • AMI DB
  • Flat File
  • AMI Shell
  • Excel
  • Fred
  • Quandl
  • Oracle
  • Msft SQL
  • MySQL
  • Apache Spark
  • SAP Sybase
  • SybaseIQ
  • KX
  • Hadoop
  • Mongo DB
  • R
  • Impala
  • PostgreSQL
  • Phoenix
  • Chronicle Queue
  • Netezza
  • MemSQL
  • Couchbase
  • SQLite
  • REST API
  • Snowflake
  • Hazelcast
  • IBM DB2
  • Greenplum
  • Symphony
  • HP Vertica
  • Hive
  • Deephaven
  • Bloomberg
  • Apache Ignite
  • Redis

Supported Feed Handlers

  • SingleStore
  • Solace
  • Tibco
  • RabbitMQ
  • KX Stream
  • Chronicle Queue
  • IBM MQ
  • OneTick
  • FIX
  • Amazon SQS
  • Aeron
  • Kafka
  • ActiveMQ
  • 60 East-Amps
  • BPIPE
  • QuantHouse
  • Google RPC

Programmatic Access

  • REST API
  • JDBC
  • .NET
  • Python
  • Java
  • ODBC
  • Google RPC
  • C++
  • MCP

Proprietary columnar database

Real-time and historical tables

With its proprietary columnar database, 3forge delivers industry-leading analytical flexibility for streaming and time-sensitive workloads through its Real-time Tables (RDB) and Historical Tables (HDB). On top of these, it adds unique features that address the difficult data problems software engineers routinely encounter.

Real-time streaming analytics

Delta-based triggers provide fast and efficient aggregations, joins, projections, and more.

Docs

Intuitive yet comprehensive language

Combines a Java/.NET-style syntax with SQL and Python.

Docs

Replication and scalability

Enable replication, processing sharding, and web load-balancing without additional code.

Docs

Streaming to historical tables

Support for streaming inserts into historical tables with immediate consistency.

Docs

Historical table sparse updates

Ability to insert, update, and delete on an ad-hoc basis.

Docs

Advanced conflation

Consume data at extreme rates and down-sample it for downstream consumers.

Docs

Self-discovery schemas

Auto-accept schema updates from dynamic remote data sources.

Docs

REST admin and query APIs

Automate monitoring and support activities, and create custom endpoints.

Docs

Remote procedure calls

Invoke functionality in remote systems.

Docs

Debugger and intuitive error codes

Add break statements and step through custom logic.

Docs

Testing and code coverage tools

Powerful analysis of code invoked, including runtimes.

Docs

Query planner

Pre-compiler and planner to achieve optimized execution.

Docs

Multi-threaded queries

High-performance data access in multi-user environments.

Docs

No-fuss historical table schema changes

Alter schemas instantly without the need to backfill data.

Docs

3forge Database Operational Envelope

  • > 10T Historical database capacity (rows)
  • > 1,000 Historical database columns
  • > 2M Real-time throughput (ops/sec)
  • < 100µs Real-time database latency

How much is 10 trillion rows?

The consolidated feed for equity trades and quotes on US markets, the SIP, carries ~2.5 billion messages per trading day. If one row holds one consolidated SIP message (trades + quotes), 10 trillion rows would represent 4,000 trading days, or about 16 years. Options market data is denser still, at ~200 billion quote updates per day on OPRA during active markets. Even at that frantic pace, 10 trillion rows would hold roughly 2.5 months of options quote traffic.
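
The back-of-envelope figures above can be checked directly (the per-day message rates are the approximations stated in the text):

```python
# Sanity-check the capacity claims above.
capacity_rows = 10e12            # stated HDB capacity: 10 trillion rows

sip_msgs_per_day = 2.5e9         # ~2.5B consolidated SIP messages per trading day
sip_days = capacity_rows / sip_msgs_per_day
sip_years = sip_days / 250       # ~250 US trading days per year

opra_quotes_per_day = 200e9      # ~200B OPRA quote updates per trading day
opra_days = capacity_rows / opra_quotes_per_day
opra_months = opra_days / 21     # ~21 trading days per month

print(sip_days, sip_years)       # 4000.0 trading days, 16.0 years
print(opra_days, opra_months)    # 50.0 trading days, ~2.4 months
```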

Scalable transformations

Real-time data aggregation

The 3forge platform is particularly well-suited for time-sensitive, large-scale data transformations and comparisons. Its inherent modularity allows for a broad array of deployment configurations that can be tailored for each specific use case. In addition, the pricing model of 3forge proves particularly beneficial for companies needing substantial scalability and flexibility while keeping costs predictable and affordable.

Price feed aggregation
Example of Price feed aggregation

A 3forge Center can receive market data from different feeds, reconcile the streams in real time, and propagate the fastest price to downstream systems. These exchange feeds may be split between data centers for additional redundancy.
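
The reconciliation logic can be sketched in miniature: keep the best-known price per symbol, propagate an update only when it carries a newer exchange timestamp, and suppress the slower feed's duplicate. Feed names and the update shape here are illustrative; a real deployment would express this through 3forge feed adapters and triggers.

```python
class PriceAggregator:
    """Toy sketch of fastest-price reconciliation across redundant feeds."""

    def __init__(self):
        self.best = {}  # symbol -> (exchange_ts, price, feed)

    def on_update(self, feed, symbol, ts, price):
        """Return the price to propagate downstream, or None if a faster
        feed already delivered this tick (the duplicate is suppressed)."""
        current = self.best.get(symbol)
        if current is None or ts > current[0]:
            self.best[symbol] = (ts, price, feed)
            return price
        return None

agg = PriceAggregator()
print(agg.on_update("feedA", "AAPL", 100, 189.50))  # 189.5  (first arrival wins)
print(agg.on_update("feedB", "AAPL", 100, 189.50))  # None   (slower duplicate)
print(agg.on_update("feedB", "AAPL", 101, 189.55))  # 189.55 (newer tick)
```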

Order aggregation
Example of Order aggregation

Similarly, a 3forge Center can receive orders from different order management systems representing asset classes or market access pathways, including no-touch and low-touch order flow, and aggregate them to achieve a unified representation of market activity for transparency, risk management, exposure control, and even internal order crossing.

Performance techniques

Optimizations for the largest data loads

3forge leverages several advanced approaches to balance the need for speed in data transmission with technical constraints that could slow down the data flow.

Delta-based processing

Rather than reprocessing or retransmitting the entire dataset each time an update occurs, 3forge only handles changes (deltas) in the data. This approach dramatically improves performance and efficiency in real-time systems.
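
The idea is easy to see with a running aggregate: apply only the difference between the old and new value instead of rescanning the whole table. This is an illustrative sketch of the principle, not 3forge's internal implementation.

```python
class DeltaSum:
    """Maintain an aggregate from deltas instead of recomputing it."""

    def __init__(self):
        self.rows = {}   # row id -> current value
        self.total = 0.0

    def upsert(self, row_id, value):
        # Apply only the change: subtract the old value, add the new one.
        self.total += value - self.rows.get(row_id, 0.0)
        self.rows[row_id] = value

    def delete(self, row_id):
        self.total -= self.rows.pop(row_id, 0.0)

s = DeltaSum()
s.upsert("o1", 10.0)
s.upsert("o2", 5.0)
s.upsert("o1", 12.0)  # O(1) adjustment, no full recompute
s.delete("o2")
print(s.total)        # 12.0
```

The same delta discipline extends to joins and projections: each incoming change touches only the derived rows it affects.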

Conflation

When replicating data across its tiered architecture, 3forge can be configured to send every update or instead send the latest one on an agreed interval. This effectively discards intermediate values, dramatically reducing the burden for downstream consumers.
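
In sketch form, interval conflation is a per-key latest-value buffer that is drained on a timer, so intermediate updates between flushes never reach the consumer. The key names and flush mechanism below are illustrative.

```python
class Conflator:
    """Keep only the latest value per key between flushes (interval conflation)."""

    def __init__(self):
        self.pending = {}

    def on_update(self, key, value):
        self.pending[key] = value  # intermediate values are discarded

    def flush(self):
        """Called on the agreed interval; emits at most one update per key."""
        batch, self.pending = self.pending, {}
        return batch

c = Conflator()
for px in (100.0, 100.2, 100.1):
    c.on_update("EURUSD", px)   # three ticks arrive within one interval
c.on_update("USDJPY", 155.3)
print(c.flush())  # {'EURUSD': 100.1, 'USDJPY': 155.3}
```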

Summarization

3forge can calculate and store regular and delta-based summary metrics, including averages, sums, counts, mins, or max values, to compress data volumes, allow for trend analysis, and support time-based analytics like moving averages.
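
A minimal version of such a summary compresses an arbitrarily long stream into a handful of running metrics (this sketch shows the arithmetic, not 3forge's summary machinery):

```python
class Summary:
    """Running count/sum/min/max/avg over a stream of values."""

    def __init__(self):
        self.count = 0
        self.sum = 0.0
        self.min = None
        self.max = None

    def add(self, x):
        self.count += 1
        self.sum += x
        self.min = x if self.min is None else min(self.min, x)
        self.max = x if self.max is None else max(self.max, x)

    @property
    def avg(self):
        return self.sum / self.count if self.count else None

s = Summary()
for x in (3.0, 1.0, 2.0):
    s.add(x)
print(s.count, s.sum, s.min, s.max, s.avg)  # 3 6.0 1.0 3.0 2.0
```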

Decoration

3forge supports automated data decoration through event-driven triggers invoked during data operations, enabling dynamic enrichment, validation, and propagation of data in response to table-level changes.
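
Conceptually, a decoration trigger fires on insert and enriches the row from a reference table before it lands. The table and field names below are invented for the sketch; in 3forge this would be expressed as a decorate trigger rather than Python callbacks.

```python
# Hypothetical reference data keyed by symbol.
ref_instruments = {"AAPL": {"sector": "Tech", "currency": "USD"}}

def on_insert(row, triggers):
    """Run each registered trigger against the incoming row."""
    for trig in triggers:
        trig(row)
    return row

def decorate_with_instrument(row):
    # Enrich the row in place with fields from the reference table.
    row.update(ref_instruments.get(row["symbol"], {}))

row = on_insert({"symbol": "AAPL", "qty": 100}, [decorate_with_instrument])
print(row)  # {'symbol': 'AAPL', 'qty': 100, 'sector': 'Tech', 'currency': 'USD'}
```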

Event-driven logic

Complex event processing

3forge offers real-time tables and database-native functionality to allow business logic to be executed on the fly based on data updates. In combination with conflation, these triggers can smartly and efficiently process only as much work as needed.

The following trigger types are supported out of the box:

Aggregation

Create and update aggregation tables as source data changes.

Projection

Maintain automatically filtered or transformed projection tables.

Join

Create and keep joined tables synchronized across source updates.

Decorate

Enrich the data with additional fields from another table.

Relay

Send messages to downstream systems through the 3forge Relay.

AMIScript

Run custom scripts upon any insert, update, or delete activity.

Trigger syntax reference

Script trigger

CREATE TRIGGER <trigger_name>
OFTYPE AMISCRIPT
ON <table_name>
[PRIORITY <priority>]
USE ...

Aggregation trigger

CREATE TRIGGER <trigger_name>
OFTYPE AGGREGATE
ON <source_table>,
   <target_table>
USE ...

Projection trigger

CREATE TRIGGER <trigger_name>
OFTYPE PROJECTION
ON <source_tables>,
   <target_table>
USE ...

Columnar archiving at scale

Petabyte historical database for archiving

While the highest query performance is achieved with real-time databases, 3forge also offers columnar historical tables capable of holding trillions of rows. Persisted to disk and supporting partitioning, these tables are designed for storing large volumes of data at high speed with fast retrieval, all while using the same SQL syntax as real-time tables.

Data from historical tables can be queried and loaded into a real-time table where the full breadth of querying optimizations can be accessed, including joins with other tables.

3forge uniquely supports the following features in its Historical database (HDB):

Large column counts and heavy data types

Support for large column counts and heavy data types including blob fields, without compromising query performance or storage efficiency.

Configurable storage strategies

Configurable storage strategies for each column, including four storage types for optimal performance. The system dynamically adapts storage types during optimization on a partition basis for disk efficiency and query speed based on actual data usage.

Type       Description
FLAT       Fixed-length types such as INT, FLOAT, DOUBLE
VARSIZE    Variable-length types such as STRING and BINARY, up to 1 TB
BITMAP     Efficient encoding for low-cardinality strings
PARTITION  Organizes rows into isolated partitions

Schema management

Add, drop, or modify columns without impacting historical partitions. Partition columns are immutable, so careful planning is essential when designing the table schema. HDB ensures older partitions are mapped to new schemas seamlessly, preserving historical integrity.

Row-level operations

HDB supports UPDATE and DELETE clauses while preserving partition optimization. Significant changes within a partition are re-optimized automatically. Sort indexing further enhances query performance.

Archiving real-time data from streaming updates

Seamlessly move data from real-time tables into HDB using event, batch, or timer-driven approaches. This ensures historical records remain up to date without interrupting ongoing operations.

HDB architecture: partitioned columnar storage with real-time ingestion

Resilience and throughput

Reliable data scaling

When processing and archiving high-velocity datasets, 3forge makes improving performance and resiliency straightforward and safe.

Database replication

3forge delivers straightforward and effective data replication between a primary and a warm-standby Center database with minimal configuration. The standby monitors the health of the primary and can take over on failure with all data already loaded in memory.

Database load-balancing

Replication can also be used for load balancing between multiple Centers based on routing rules in the Relay. These multiple Centers increase redundancy and scalability. The flexibility of the platform allows for more advanced deployments, such as multi-region and even global replication.

Architecture resiliency

Global infrastructure resiliency

Every aspect of the tiered 3forge architecture can implement data replication for redundancy and resiliency.

Diversity of feed and data source adapters

Allow data to be sourced and archived to and from remote systems.

3forge Relays

Can disseminate data to multiple Centers for hot-hot and hot-warm redundancy scenarios.

3forge Centers

Can be configured together to distribute work or provide hot or warm standby.

3forge Web components

Able to connect to multiple Centers and support distributed user profile management.

3forge Web Balancer

Allows multiple Web instances to share a single IP address, routing users based on available Web capacity.

Tiered 3forge architecture: adapters, Relays, Centers, Web Balancer

Intelligent dispatch

Dynamic message routing & map reduce

3forge enables dynamic routing rules that intelligently dispatch messages to specialized processing centers based on content, load, or custom logic, all configurable in real time. Each center can independently process its portion of the workload before the results are seamlessly merged through a map-reduce operation. This architecture ensures optimal resource utilization, parallel processing, and high-performance aggregation across distributed systems.
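
The dispatch-and-merge pattern can be sketched as a hash-based router feeding per-center partial aggregates that are then reduced into one view. The routing key and center count are illustrative; 3forge routing rules can equally use load or custom logic.

```python
from collections import defaultdict

def route(msg, n_centers):
    """Content-based dispatch: hash the symbol to pick a processing center."""
    return hash(msg["symbol"]) % n_centers

def map_reduce(messages, n_centers=3):
    # Map: each center independently sums the quantity of its share.
    partials = [defaultdict(float) for _ in range(n_centers)]
    for m in messages:
        partials[route(m, n_centers)][m["symbol"]] += m["qty"]
    # Reduce: merge the per-center partial results into one aggregate.
    merged = defaultdict(float)
    for p in partials:
        for sym, qty in p.items():
            merged[sym] += qty
    return dict(merged)

msgs = [{"symbol": "AAPL", "qty": 100}, {"symbol": "MSFT", "qty": 50},
        {"symbol": "AAPL", "qty": 25}]
print(map_reduce(msgs))  # {'AAPL': 125.0, 'MSFT': 50.0} (key order may vary)
```

Because each partial is disjoint by routing key, the reduce step is a simple merge and the final result is independent of how work was sharded.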

Dynamic routing and map-reduce across distributed centers
Routing rules dispatch messages to specialized centers based on content or load; results are merged via map-reduce for high-performance aggregation

Durability and continuity

Guaranteed messaging

Guaranteed messaging ensures that every critical update is durably persisted and delivered reliably, regardless of system load or transient network conditions. Messages are journaled to a write-ahead log (WAL), allowing downstream centers, including those temporarily offline, to recover and replay missed data upon reconnection. While guaranteed messaging does not perform duplicate suppression, it provides the durability and continuity required for real-time trading, risk, and compliance systems.
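
A minimal sketch of the journal-and-replay mechanism: every message is appended with a sequence number, and a reconnecting consumer replays everything after the last sequence it saw. The in-memory list stands in for durable storage, and as described above, no duplicate suppression is attempted.

```python
class WalJournal:
    """Toy write-ahead-log: journal every message with a sequence number
    so an offline consumer can replay what it missed on reconnection."""

    def __init__(self):
        self.log = []  # append-only; durable on disk in a real system

    def append(self, msg):
        self.log.append((len(self.log) + 1, msg))  # (seq, payload)

    def replay_from(self, last_seen_seq):
        """Messages the consumer missed while offline."""
        return [(seq, msg) for seq, msg in self.log if seq > last_seen_seq]

wal = WalJournal()
for m in ("fill-1", "fill-2", "fill-3"):
    wal.append(m)

# A center that disconnected after seq 1 replays everything since:
print(wal.replay_from(1))  # [(2, 'fill-2'), (3, 'fill-3')]
```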

Write-ahead log journaling and downstream replay for guaranteed delivery
The WAL ensures downstream centers can recover and replay missed messages upon reconnection, without interrupting live operations

In-flight enrichment

Message transformation and routing

3forge includes powerful message transformation capabilities, allowing firms to inspect, modify, enrich, or filter messages in-flight, all without writing custom code. This is especially valuable for FIX and other financial protocols, where dynamic adjustments may be required to support different counterparties, normalize formats, redact sensitive fields, or route based on content. With support for declarative rules and scripting, 3forge enables rapid, flexible logic tailored to evolving trading and compliance needs.
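
To make the idea concrete, here is a sketch of redaction and content-based routing on a FIX-style tag=value message (tag 1 = Account, tag 35 = MsgType, SOH-delimited). The routing names are invented for the sketch; real deployments express this as declarative rules, and a production transform would also recompute the FIX checksum (tag 10).

```python
SOH = "\x01"  # FIX field delimiter

def parse_fix(raw):
    """Split a SOH-delimited tag=value message into a dict."""
    return dict(f.split("=", 1) for f in raw.strip(SOH).split(SOH))

def transform(raw, redact_tags=("1",)):
    """Redact sensitive fields and pick a route from MsgType (tag 35)."""
    fields = parse_fix(raw)
    for tag in redact_tags:
        if tag in fields:
            fields[tag] = "****"          # redact, e.g. the Account field
    route = "orders" if fields.get("35") == "D" else "other"
    out = SOH.join(f"{t}={v}" for t, v in fields.items()) + SOH
    return out, route

msg = SOH.join(["35=D", "55=AAPL", "1=ACCT-42", "38=100"]) + SOH
out, route = transform(msg)
print(route)            # orders (NewOrderSingle routed to the order path)
print("1=****" in out)  # True   (account redacted in-flight)
```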

Message transformation pipeline — inspect, enrich, filter, and route in-flight
Transformation rules operate on FIX and financial protocol messages in-flight, enabling normalization, redaction, and content-based routing without custom code

Next steps

See Live Data handle your most demanding feeds.