Building the Migration Engine That Moved ETQ's 5,000-Customer QMS Platform to Multi-Tenant SaaS
ETQ ran thousands of QMS customers across incompatible single-tenant environments with no path to consolidation. Ideas2IT built a custom migration engine that moved 10TB+ per customer from SQL Server and Oracle to a multi-tenant MySQL platform, with checksum validation and audit trail continuity at every stage.


Client
ETQ

Industry
Technology

Service
Data Modernization

Platform
Custom QMS Data Migration Engine
01 Challenge
ETQ operated QMS software across thousands of customers on incompatible single-tenant systems, some on-premises, some on private cloud. Each version required independent maintenance. Onboarding a new customer or pushing a compliance update required separate engineering work per environment, and the cost of that fragmentation compounded with every new contract.
02 Solution
Ideas2IT built a custom migration platform with four integrated modules: a pre-check engine that assessed schema compatibility before any data moved; a DB converter that normalized SQL Server and Oracle schemas to MySQL; a parallel migration engine with full, incremental, and replication modes; and a post-migration validation layer with checksum, row count, and audit log verification at every stage.
03 Outcome
Maintenance cost dropped 36%. The migration completed at a 99.99% success rate. ETQ's customers now run on a unified multi-tenant cloud architecture, and new commercial models like subscription pricing became viable once single-tenant infrastructure was no longer a constraint.
Phase 01
Migration readiness engine: compatibility assessment before any data moved
The first constraint was epistemic: before migrating a single table, the team needed to know exactly what would break.
Ideas2IT built a pre-check module that interrogated source environments on-premises and in the cloud, capturing database size, object counts, authentication methods, integration profiles, and unsupported feature flags.
ETQ's customers ran on SQL Server and Oracle with custom scripts, Java class references in formula fields, stored procedures, and LDAP or Kerberos SSO configurations that had no direct equivalents in the target MySQL environment.
The assessment engine produced customer-specific compatibility reports that classified each issue by type, flagged it for remediation, and gave the migration team a clean handoff surface before conversion began.
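The issue-classification step of the pre-check can be sketched as follows. This is a minimal illustration, not ETQ's actual engine: the category names, `Issue`/`CompatibilityReport` types, and `precheck` checks are hypothetical stand-ins for a small subset of the scans described above.

```python
from dataclasses import dataclass, field

# Hypothetical issue categories mirroring the pre-check report types above.
CATEGORIES = ("authentication", "script", "integration", "sql", "integrity")

@dataclass
class Issue:
    category: str   # one of CATEGORIES
    detail: str     # human-readable description for the handoff report
    blocking: bool  # must be remediated before conversion begins

@dataclass
class CompatibilityReport:
    customer: str
    issues: list = field(default_factory=list)

    def flag(self, category, detail, blocking=False):
        self.issues.append(Issue(category, detail, blocking))

    def summary(self):
        # Classify issues by type for the per-customer compatibility report.
        counts = {c: 0 for c in CATEGORIES}
        for issue in self.issues:
            counts[issue.category] += 1
        return counts

def precheck(customer, source_meta):
    """Run a small subset of compatibility checks against source metadata."""
    report = CompatibilityReport(customer)
    if source_meta.get("sso") in ("LDAP", "Kerberos"):
        report.flag("authentication",
                    f"{source_meta['sso']} SSO has no direct target-side equivalent",
                    blocking=True)
    for ref in source_meta.get("java_class_refs", []):
        report.flag("script", f"Java class reference in formula field: {ref}")
    if source_meta.get("stored_procedures", 0) > 0:
        report.flag("sql", f"{source_meta['stored_procedures']} stored procedures need conversion")
    return report
```

The essential pattern is that every finding is typed and flagged rather than fixed inline, so remediation work can be routed before any conversion runs.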
This Phase Produced
- Pre-check compatibility engine (On-prem and cloud source assessment)
- Authentication conflict detector (LDAP, Kerberos, HTTP SSO flag-and-report)
- ETQ Script compatibility scanner (Unsupported imports, Java class references)
- Integration profile validator (XML and CSV/Text connection profile checks)
- SQL compatibility analyzer (Stored procedures, custom views, formula fields)
- Database integrity checker (Corrupted tables, duplicate design names)
- Customer compatibility report (Per-customer issue classification and export)
Phase 02
Schema conversion and parallel migration: terabytes moved without breaking referential integrity
The conversion module used SQLines to transform SQL Server and Oracle schemas to MySQL, handling DDL, indexes, primary and foreign keys, views, stored procedures, and embedded queries.
Custom business logic migration handled coupling and decoupling of related tables and re-created objects without constraint loss. The migration engine ran in three configurable modes: full lift-and-shift, incremental CDC delta, and real-time replication.
Migrations from multiple source tenants to a single multi-tenant RDS target ran in parallel, with AWS DataSync managing file system transfers and PySpark handling core schema migration. Tenant data isolation was enforced at the storage layer through environment-specific EFS paths, preventing cross-tenant data leakage by architecture rather than policy. Datadog provided live monitoring, audit logs, and alerting throughout.
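The combination of mode selection, parallel fan-in, and path-scoped tenant isolation can be sketched in a few lines. This is an illustrative skeleton under stated assumptions, not the production engine: the `Mode` enum, `efs_path` layout, and `migrate_tenant` placeholder are hypothetical, and the real per-tenant copy is a PySpark job or replication stream rather than a stub.

```python
from concurrent.futures import ThreadPoolExecutor
from enum import Enum

class Mode(Enum):
    FULL = "full"             # lift-and-shift of all rows
    INCREMENTAL = "cdc"       # CDC delta since the last sync point
    REPLICATION = "realtime"  # continuous change replication

def efs_path(env_id, tenant_id):
    # Isolation by construction: every path is scoped to one environment ID,
    # so cross-tenant reads are impossible at the storage layer.
    return f"/mnt/efs/{env_id}/{tenant_id}/data"

def migrate_tenant(tenant, mode):
    # Placeholder for the per-tenant copy; in the real engine this dispatches
    # a PySpark job (full/CDC) or a replication stream.
    target = efs_path(tenant["env_id"], tenant["id"])
    return {"tenant": tenant["id"], "mode": mode.value,
            "target": target, "status": "ok"}

def migrate_all(tenants, mode, workers=4):
    # Multiple source tenants converge on one multi-tenant target in parallel.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda t: migrate_tenant(t, mode), tenants))
```

The design point worth noting is that isolation lives in the path-construction function, not in access-control policy, which is what "by architecture rather than policy" means in practice.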
This Phase Produced
- SQLines-based DB conversion module (SQL Server / Oracle to MySQL schema transform)
- Custom business logic migration layer (Table coupling/decoupling, constraint re-creation)
- Parallel multi-tenant migration engine (Multi-source to single-destination, simultaneous)
- Full / Incremental / Replication migration modes (CDC delta and real-time replication support)
- AWS DataSync file system transfer (Source EBS/EFS to target EFS, environment-scoped)
- PySpark core schema migration (High-volume table migration with validation hooks)
- Tenant data isolation layer (Environment-ID-scoped EFS paths, no cross-tenant access)
Phase 03
Post-migration validation and cutover: the verification layer that decommissioned legacy systems safely
Before any legacy environment was decommissioned, the validation module ran checksum verification, table counts, row counts, and column counts across every migrated schema. Auto-increment adjustments, constraint cleanup, and ID alignment were applied post-import to confirm state parity.
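The checksum-and-count comparison can be illustrated with a small sketch. This is a simplified stand-in for the validation module, assuming an order-independent checksum (source and target row order need not match after a parallel import); the function names are hypothetical.

```python
import hashlib

def table_checksum(rows):
    # Order-independent checksum: hash each row and XOR the digests, so the
    # source and target tables compare equal regardless of row ordering.
    acc = 0
    for row in rows:
        digest = hashlib.sha256(repr(row).encode()).digest()
        acc ^= int.from_bytes(digest, "big")
    return acc

def validate_table(source_rows, target_rows):
    # Both checks must pass before the source table counts as verified;
    # the real module adds table and column counts per migrated schema.
    checks = {
        "row_count": len(source_rows) == len(target_rows),
        "checksum": table_checksum(source_rows) == table_checksum(target_rows),
    }
    checks["passed"] = all(checks.values())
    return checks
```

A single failing check on any table blocks decommission of that source environment, which is the behavior the 99.99% figure rests on.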
The EMU module handled environment-to-environment operations across PODs (export, DataSync transfer, import, and removal), enabling parallel POD utilization without manual orchestration. A full audit trail covering every migration job, transfer task, and validation result was published to Datadog dashboards accessible to both teams.
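The EMU sequence behaves like a gated pipeline: each stage runs only if every prior stage succeeded, and removal is the last gate. The sketch below is a hypothetical reduction of that control flow (stage names and the `steps` callback shape are assumptions, with an explicit `validate` stage inserted before removal to mirror the decommission rule).

```python
def run_emu_job(source_pod, target_pod, env_id, steps):
    """Execute EMU stages in order; each stage callback returns True on
    success. Removal never runs after any failed stage, mirroring the rule
    that a source environment is only removed once validation passes."""
    STAGES = ("export", "transfer", "import", "validate", "remove")
    log = []
    for stage in STAGES:
        ok = steps[stage](source_pod, target_pod, env_id)
        log.append((stage, ok))
        if not ok:
            break  # stop the pipeline; the source environment survives
    return log
```

Because each job is self-contained, many such pipelines can run against different PODs at once, which is what made parallel POD utilization possible without manual orchestration.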
The migration completed at a 99.99% success rate. ETQ's customers moved to a scalable multi-tenant architecture with no version-specific maintenance overhead remaining.
This Phase Produced
- Checksum and metadata validation module (Pre/post state comparison per migrated schema)
- Post-migration constraint and ID alignment (Auto-increment, foreign key, and ID normalization)
- EMU (NXG-to-NXG) environment module (Export, transfer, import, and removal across PODs)
- DataSync cross-POD EFS transfer (Environment-scoped data movement between PODs)
- Datadog migration audit dashboard (Progress, warnings, errors, audit logs, metrics)
- Legacy decommission verification framework (Sign-off checklist before source environment removal)
"Ideas2IT built us a migration engine, not a migration project. The same platform that moved our first customer moved our thousandth, without a single incident that required us to roll back."
The Outcome
5,000 customers. 10TB+ per migration. One platform that scales without a version ceiling.
The 36% maintenance cost reduction followed directly from the architectural consolidation: one codebase, one compliance pipeline, one upgrade cycle. The 99.99% success rate was not a performance target — it was the product of a validation framework that refused to decommission a legacy environment until every checksum matched. ETQ now onboards new customers through configuration, not infrastructure provisioning. That is what eliminating single-tenant fragmentation actually produces.