
Job Summary


Salary
$6,600 - $8,800 SGD / monthly (est.)

Job Type
Permanent

Seniority
Senior (≥ 6 yrs)
Mid (3-5 yrs)

Years of Experience
3-9 years

Tech Stacks
Amazon S3
Analytics
Spark Streaming
Teradata
Avro
Apache
Presto
Hive
Spark
JSON
Hadoop
Java

Job Description


The position exists to continuously enhance the efficiency of the existing data pipeline jobs and framework, and to establish best practices for Big Data projects planned on the BI & Analytics platform.

Key Accountabilities:
1. Review and tune code in the Spark framework to increase throughput.
2. Establish best practices and guidelines for tech teams to follow when writing Spark code.
3. Create a framework to continuously monitor performance and identify resource-intensive jobs.

Job Duties & Responsibilities:
· Able to write code, services and components in Java, Apache Spark and Hadoop.
· Responsible for systems analysis, design, coding, unit testing, CI/CD and other SDLC activities.
· Proven experience working with the Apache Spark streaming and batch frameworks.
· Proven experience in performance tuning of Java- and Spark-based applications is a must.
· Knowledge of working with different file formats such as Parquet, JSON and Avro.
· Data warehouse experience working with RDBMS and Hadoop; well versed in the concepts of change data capture and SCD implementation.
· Hands-on experience in writing code to efficiently handle files on Hadoop.
· Ability to work on projects following a microservices-based architecture.
· Knowledge of S3 will be a plus.
· Ability to work proactively, independently and with global teams.
· Strong communication skills; able to communicate effectively with stakeholders and present the outcomes of analysis.
· Working experience on projects following Agile methodology is preferred.

Required Experience:
- 3 to 9 years of working experience, preferably in banking environments.
- Expert knowledge of technologies such as Hadoop, Hive, Presto, Spark, Java, RDBMS (e.g. Teradata) and file storage such as S3 is necessary.
- Candidate must speak and write well.
- Degree holder.

Core Competencies:
· Experience with Hadoop-ecosystem frameworks such as Hive and Spark, and with Java.
· Strong knowledge of relational database systems and data warehousing techniques.
· Strong listening and communication skills, with the ability to clearly and concisely explain complex technical issues.
