Data Engineer

Engineering · Full-time · Hyderabad, India

Job description

About Turvo:

Turvo provides a collaborative Transportation Management System (TMS) application designed specifically for the supply chain. Turvo Collaboration Cloud connects freight brokers, 3PLs, shippers, and carriers to unite supply chain ecosystems, delivering outstanding customer experiences, real-time collaboration, and accelerated growth. The technology unifies internal and external systems, providing one end-to-end solution that streamlines operations, enhances analytics, and automates business processes while eliminating redundant manual tasks. Turvo’s customers include some of the world’s largest Fortune 500 logistics service providers and shippers as well as small to mid-sized freight brokers.

Turvo is based in Dallas, Texas, with offices in Hyderabad, India (www.turvo.com).

Responsibilities:

  • Design and implement scalable data pipelines (e.g., ETL) and processes.
  • Build enterprise-scale data warehouse and database models end to end.
  • Work hands-on with Snowflake and related ETL technologies.
  • Build reports and dashboards with Tableau, Power BI, or other BI tools.
  • Use native AWS data and analytics services such as Redshift, S3, Lambda, Glue, EMR, Kinesis, SNS, and CloudWatch.
  • Work with NoSQL databases (MongoDB, Elasticsearch).
  • Work with relational databases, writing and optimising SQL queries for analytics and reporting.
  • Develop scalable data applications and reporting frameworks.
  • Work with message queues, preferably Kafka or RabbitMQ.
  • Write code in Python, Java, Scala, or other languages.

Qualifications:

  • 3+ years of experience architecting enterprise DW/data lake solutions across multiple platforms.
  • Experience writing high-quality, maintainable SQL on large datasets.
  • Expertise in designing and implementing scalable data pipelines (e.g., ETL) and processes in a data warehouse/data lake to support dynamic business demand for data.
  • Experience building and optimising logical data models and data pipelines while delivering high-quality data solutions that are testable and adhere to SLAs.
  • Excellent knowledge of and experience with query optimisation and tuning.
  • Knowledgeable about a variety of strategies for ingesting, modelling, processing, and persisting data.
