The Role
We are seeking an experienced Senior Data Engineer to join the Reference Data Team, a critical component of the Addepar Platform team. The Addepar Platform is a comprehensive data fabric that provides a single source of truth for our product set, encompassing a centralized, self-describing repository, API-driven data services, integration pipelines, analytics infrastructure, warehousing solutions, and operating tools.
The Reference Data Team is responsible for the acquisition, conversion, cleansing, reconciliation, modeling, tooling, and infrastructure related to the integration of market and security master data from third-party data providers. This team plays a crucial role in our core business, enabling alignment across public and alternative investment data products and empowering clients to effectively manage their investment portfolios.
As a Senior Data Engineer on the Reference Data Team, you will collaborate closely with product counterparts in an agile environment to drive business outcomes. Your responsibilities will include contributing to complex engineering projects using a modern and diverse technology stack, including PySpark, Python, AWS, Terraform, Java, Kubernetes, and more.
What You'll Do
Partner with multi-functional teams to design, develop, and deploy scalable data solutions that meet business requirements.
Build pipelines that support the ingestion, analysis, and enrichment of financial data in collaboration with business data analysts.
Advocate for standard methodologies and identify opportunities for automation and optimization in code and processes to increase the throughput and accuracy of data.
Develop and maintain efficient process controls and accurate metrics that improve data quality and increase operational efficiency.
Work in a fast-paced, dynamic environment to deliver high-quality results and drive continuous improvement.
Who You Are
6+ years of professional data engineering experience.
A computer science degree or equivalent experience.
Proficiency with at least one object-oriented programming language (e.g., Python, Java) and familiarity with frameworks such as PySpark.
Proficiency with relational databases, SQL, and data pipelines.
Rapid learner with strong problem-solving skills.
Knowledge of financial concepts (e.g., stocks, bonds) is helpful but not required.
Experience in data modeling and visualization is a plus.
Passion for the world of FinTech and for solving previously intractable problems at the heart of investment management is a plus.
Experience with any public cloud is highly desired (AWS preferred).
Experience with data lakes or data platforms such as Databricks is highly preferred.
Important Note: This role requires working from our Pune office three days a week (hybrid work model).