There is an excellent opportunity for a talented Data Engineer with a proven track record of delivering high-volume data and data-pipeline solutions and architecture.
The role requires in-depth knowledge of cutting-edge big data technologies, including tuning, troubleshooting and scaling them; curiosity about the internal workings of these systems is key to being successful.
- 5-8 years of programming experience with Python / Java
- Experience with Data Engineering - Hadoop ecosystem and streaming
- Cloud experience, ideally with AWS services
- Excellent communication and collaboration skills
- Evaluating and defining requirements and problem statements.
- Developing user documentation, diagrams & flowcharts.
RECOMMENDED REQUIREMENTS:
- Good understanding of distributed messaging systems (Kafka, Solace etc.)
- Comfortable building and maintaining CloudFormation templates, Ansible playbooks and scripts to automate and deploy AWS resources and configuration changes
- Familiar with Machine Learning concepts
- Solid understanding of big data technologies such as HDFS, Hive, Spark, YARN, etc.
- Experience with a major relational or NoSQL database (e.g. Oracle / Cassandra)
- AWS certified
This is a hands-on software engineering role: a large part of an engineer's time is spent writing code, with the remainder spent designing and architecting systems, tuning and debugging big data systems, supporting production systems, and engaging with ML advancements.
The ideal candidate will have cloud experience, preferably with AWS, and will be a great teammate with a forward-thinking approach and the ability and confidence to challenge the status quo in defining future visions.
The position is based in Singapore and forms part of the global Apple Online Store team.