Data Engineer

Remote · Individual Contributor · Full-time

Overview

Design, build, and maintain data infrastructure that enables reliable ingestion, transformation, storage, and delivery of data for analytics, data science, and AI workloads in enterprise environments.

Responsibilities

  • Design and maintain robust, scalable data pipelines (ETL / ELT).
  • Integrate multiple data sources, including APIs, databases, files, and streaming systems.
  • Optimize data storage and query performance.
  • Ensure data quality, lineage, security, and reliability.
  • Implement data models to support analytics and business intelligence use cases.
  • Collaborate closely with data science, AI engineering, and analytics teams.
  • Document data flows, schemas, and operational processes.
  • Participate in continuous improvement of data architecture and scalability.

Qualifications

Technical Requirements

  • Advanced SQL skills and strong data modeling experience.
  • Proficiency in Python or Scala for data processing.
  • Experience with data storage systems such as PostgreSQL, MySQL, BigQuery, Snowflake, or Redshift.
  • Hands-on experience with ETL/ELT tools (Airflow, dbt, Fivetran, or similar).
  • Knowledge of cloud-based data architectures (AWS, Azure, or GCP).
  • Solid understanding of data lake and data warehouse concepts.

Experience

  • 2+ years building and maintaining production-grade data pipelines.
  • Experience working with high-volume data or multiple heterogeneous data sources.
  • Experience collaborating in multidisciplinary teams.

Core Competencies

  • Structured thinking with a strong focus on data quality.
  • Ability to anticipate and address scalability challenges.
  • High sense of ownership over data reliability and availability.