
Job Summary


Job Type
Permanent

Seniority
Junior (≤ 2 yrs)

Years of Experience
At least 2 years

Tech Stacks
ETL
Google Cloud
Dataflow
EMR
Redshift
BigQuery
HBase
Composer
Hive
Flink
Airflow
Kafka
SQL
Hadoop
Python
AWS
Java

Job Description


Our Data Engineer plays a pivotal role in building data ETL pipelines, data integrations, and other big data systems. This role reports to our Data team lead and is based in our Singapore office. Our ideal candidate is familiar with and passionate about big data technologies, has strong communication skills, learns quickly, and pays close attention to detail.


About Technology & Engineering

In a fast-growing industry like ours, we can’t afford to stand still. At Technology & Engineering, we constantly test and improve our products to create the best experience in the travel and leisure industry. The team hires curious and analytical people who are always eager to push boundaries and have real impact.


What you’ll do

  • Design and develop solutions to store and process data on cloud platforms
  • Design and develop ETL processes to ingest data into the data warehouse, and ensure data accuracy and stability
  • Participate in building a big data platform that processes data at scale
  • Work on data streaming systems and frameworks to ensure timely delivery of accurate data
  • Work on cloud-native data infrastructure, optimize data performance, implement data monitoring and alert tools, and manage various data workloads
  • Create scripts and workflows to automate repeated data processing tasks and simplify complex task flows
  • Assist in quality control of quantitative and qualitative research projects
  • Participate in peer code reviews and produce high quality documentation

What you’ll need

  • BS/MS or higher degree in mathematics, computer science or other technical/quantitative discipline
  • Ideally 2 years’ experience in data development and engineering; fresh graduates may be considered for the role
  • Familiarity with Hadoop, Hive, HBase, Flink, Hudi, Airflow, Kafka, etc.
  • Exposure to big data solutions on AWS or Google Cloud, such as Redshift, EMR, BigQuery, Dataflow, Composer, etc.
  • Proficient in SQL, and familiar with Java and Python
  • A spirit of collaboration and transparent communication
  • High personal code/development standards (peer testing, unit testing, documentation, etc.)



Klook is proud to be an equal opportunity employer. We hire talented and passionate people of all backgrounds. We believe that a joyful workplace is an inclusive workplace, one where employees from all walks of life have an equal opportunity to thrive. We’re dedicated to creating a welcoming and supportive culture where everyone belongs.


Klook does not accept unsolicited resumes from any temporary staffing agency, placement service or professional recruiter (“Agency”). Klook will not be responsible for, and will not pay, any fees, commissions or other payments related to such unsolicited resumes.


An Agency must obtain advance written approval from Klook’s Talent Acquisition Team to submit resumes, and then only in conjunction with a valid fully-executed agreement for service and in response to a specific job opening for which the Agency has been requested to submit resumes for. Klook will not be responsible for, and will not pay, any fees, commissions or other payments to any Agency that does not have such agreement in place or does not comply with the foregoing.
