
Job Summary


Job Type
Permanent

Seniority
Senior

Years of Experience
At least 6 years

Tech Stacks
ETL
Analytics
Grafana
Prometheus
SQL
Python
AWS

Job Description


We are looking for a Full Stack Data Engineer to play a pivotal role in how we digest and analyze our internal business data. You will work closely with our co-founders to leverage data to drive business decisions.

As a Full Stack Data Engineer, you will be responsible for defining and managing data sets and designing our reporting solutions and analytics capabilities. You will collaborate to understand the business processes that generate the data, define key data attributes, and build systems that make the data available in a secure and meaningful way. You will also be responsible for making our vast amounts of data accessible and efficient to query so our teams can make the best data-driven decisions. You are a self-starter and are comfortable working cross-functionally with other teams.

Summary of key responsibilities

  • Work closely with the upstream engineering team and the downstream finance & operations team to understand internal data, translate business requirements into technical designs, and build scalable data pipelines/ETL processes using modern tools and best practices, including selecting those tools
  • Work closely with business stakeholders to identify and define key metrics and understand specific requirements for analysis
  • Design and develop efficient data pipelines and models to add data views that power internal analytics
  • Ensure data definitions are standardized and available throughout the organization via a data dictionary
  • Ensure data ingested into data pipelines is of high quality and that processes are in place to detect and address changes in, or issues arising from, upstream operational tables
  • Design business intelligence infrastructure and interfaces to improve the availability and usability of data and insights, including selection and maintenance of tooling (e.g., Amazon Aurora, Prometheus, Grafana)
  • Build scalable automation/dashboarding solutions utilizing SQL, Python, or other relevant languages to produce models and analyses of large data sets to measure and alert on performance and risk management outcomes
  • Carry out ad hoc analysis as needed and tell a story with the data, focused on business insights such as operational opportunities for improvement
  • Partner with and clearly communicate results to influence leaders and key decision-makers

Ideal qualifications

  • At least 6 years of experience in business intelligence, data engineering, or data analytics with a focus on building data pipelines and conducting analysis on large data sets
  • Degree in mathematics, statistics, engineering, or a related technical field with an interest in the digital asset space
  • Strong knowledge of SQL and Python
  • Advanced knowledge of data warehousing concepts and schema optimization based on usage patterns
  • Prior experience with writing and debugging data pipelines
  • Hands-on experience with cloud-based data warehousing solutions, preferably in AWS
  • Direct experience with metrics and visualizations using business intelligence reporting tools (Looker, Power BI, or similar)
  • Experience in developing ETL processes and using databases in a business environment with large data sets
  • Ability to work cross-functionally, build trust with internal stakeholders, derive requirements, and architect shared data sets
  • Excellent communication skills, strong business intuition, the ability to understand complex business systems, and the versatility and willingness to learn new technologies on the job
